00:00:00.000 Started by upstream project "autotest-nightly" build number 4338 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3701 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.145 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.200 Using shallow fetch with depth 1 00:00:00.200 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.200 > git --version # timeout=10 00:00:00.266 > git --version # 'git version 2.39.2' 00:00:00.266 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.301 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.301 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.972 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.985 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.998 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.998 > git config core.sparsecheckout # timeout=10 00:00:05.009 > git read-tree -mu HEAD # timeout=10 00:00:05.027 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.050 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.050 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.137 [Pipeline] Start of Pipeline 00:00:05.150 [Pipeline] library 00:00:05.151 Loading library shm_lib@master 00:00:05.151 Library shm_lib@master is cached. Copying from home. 00:00:05.167 [Pipeline] node 00:00:05.180 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:05.182 [Pipeline] { 00:00:05.191 [Pipeline] catchError 00:00:05.192 [Pipeline] { 00:00:05.206 [Pipeline] wrap 00:00:05.215 [Pipeline] { 00:00:05.224 [Pipeline] stage 00:00:05.226 [Pipeline] { (Prologue) 00:00:05.243 [Pipeline] echo 00:00:05.244 Node: VM-host-SM38 00:00:05.251 [Pipeline] cleanWs 00:00:05.260 [WS-CLEANUP] Deleting project workspace... 00:00:05.260 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.267 [WS-CLEANUP] done 00:00:05.470 [Pipeline] setCustomBuildProperty 00:00:05.563 [Pipeline] httpRequest 00:00:05.905 [Pipeline] echo 00:00:05.907 Sorcerer 10.211.164.20 is alive 00:00:05.916 [Pipeline] retry 00:00:05.918 [Pipeline] { 00:00:05.934 [Pipeline] httpRequest 00:00:05.938 HttpMethod: GET 00:00:05.938 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.939 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.941 Response Code: HTTP/1.1 200 OK 00:00:05.941 Success: Status code 200 is in the accepted range: 200,404 00:00:05.942 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.520 [Pipeline] } 00:00:06.534 [Pipeline] // retry 00:00:06.543 [Pipeline] sh 00:00:06.831 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.846 [Pipeline] httpRequest 00:00:07.410 [Pipeline] echo 00:00:07.412 Sorcerer 10.211.164.20 is alive 00:00:07.419 [Pipeline] retry 00:00:07.421 [Pipeline] { 00:00:07.467 [Pipeline] httpRequest 00:00:07.471 HttpMethod: GET 00:00:07.472 URL: http://10.211.164.20/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:07.472 Sending request to url: http://10.211.164.20/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:07.474 Response Code: HTTP/1.1 200 OK 00:00:07.474 Success: Status code 200 is in the accepted range: 200,404 00:00:07.475 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:37.090 [Pipeline] } 00:00:37.109 [Pipeline] // retry 00:00:37.118 [Pipeline] sh 00:00:37.404 + tar --no-same-owner -xf spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz 00:00:40.711 [Pipeline] sh 00:00:40.990 + git -C spdk log --oneline -n5 00:00:40.990 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:00:40.990 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:00:40.990 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:00:40.990 e2dfdf06c accel/mlx5: Register post_poller handler 00:00:40.990 3c8001115 accel/mlx5: More precise condition to update DB 00:00:41.008 [Pipeline] writeFile 00:00:41.025 [Pipeline] sh 00:00:41.305 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:41.316 [Pipeline] sh 00:00:41.594 + cat autorun-spdk.conf 00:00:41.594 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.594 SPDK_TEST_NVME=1 00:00:41.594 SPDK_TEST_FTL=1 00:00:41.594 SPDK_TEST_ISAL=1 00:00:41.594 SPDK_RUN_ASAN=1 00:00:41.594 SPDK_RUN_UBSAN=1 00:00:41.594 SPDK_TEST_XNVME=1 00:00:41.594 SPDK_TEST_NVME_FDP=1 00:00:41.594 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:41.601 RUN_NIGHTLY=1 00:00:41.603 [Pipeline] } 00:00:41.617 [Pipeline] // stage 00:00:41.634 [Pipeline] stage 00:00:41.637 [Pipeline] { (Run VM) 00:00:41.652 [Pipeline] sh 00:00:41.930 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:41.930 + echo 'Start stage prepare_nvme.sh' 00:00:41.930 Start stage prepare_nvme.sh 00:00:41.930 + [[ -n 1 ]] 00:00:41.930 + disk_prefix=ex1 00:00:41.930 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:00:41.930 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:00:41.930 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:00:41.930 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.930 ++ SPDK_TEST_NVME=1 00:00:41.930 ++ SPDK_TEST_FTL=1 00:00:41.930 ++ SPDK_TEST_ISAL=1 
00:00:41.930 ++ SPDK_RUN_ASAN=1 00:00:41.930 ++ SPDK_RUN_UBSAN=1 00:00:41.930 ++ SPDK_TEST_XNVME=1 00:00:41.930 ++ SPDK_TEST_NVME_FDP=1 00:00:41.930 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:41.930 ++ RUN_NIGHTLY=1 00:00:41.930 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:00:41.930 + nvme_files=() 00:00:41.930 + declare -A nvme_files 00:00:41.930 + backend_dir=/var/lib/libvirt/images/backends 00:00:41.930 + nvme_files['nvme.img']=5G 00:00:41.930 + nvme_files['nvme-cmb.img']=5G 00:00:41.930 + nvme_files['nvme-multi0.img']=4G 00:00:41.930 + nvme_files['nvme-multi1.img']=4G 00:00:41.930 + nvme_files['nvme-multi2.img']=4G 00:00:41.930 + nvme_files['nvme-openstack.img']=8G 00:00:41.930 + nvme_files['nvme-zns.img']=5G 00:00:41.930 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:41.930 + (( SPDK_TEST_FTL == 1 )) 00:00:41.930 + nvme_files["nvme-ftl.img"]=6G 00:00:41.930 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:41.930 + nvme_files["nvme-fdp.img"]=1G 00:00:41.930 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:41.930 + for nvme in "${!nvme_files[@]}" 00:00:41.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:41.930 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:41.930 + for nvme in "${!nvme_files[@]}" 00:00:41.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:42.189 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:42.189 + for nvme in "${!nvme_files[@]}" 00:00:42.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:42.189 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:42.189 + for nvme in "${!nvme_files[@]}" 00:00:42.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:42.189 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:42.189 + for nvme in "${!nvme_files[@]}" 00:00:42.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:42.447 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:42.447 + for nvme in "${!nvme_files[@]}" 00:00:42.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:42.447 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:42.447 + for nvme in "${!nvme_files[@]}" 00:00:42.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:42.447 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:42.447 + for nvme in "${!nvme_files[@]}" 00:00:42.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:42.447 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:42.447 + for nvme in "${!nvme_files[@]}" 00:00:42.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:42.447 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:42.447 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:42.447 + echo 'End stage prepare_nvme.sh' 00:00:42.447 End stage prepare_nvme.sh 00:00:42.458 [Pipeline] sh 00:00:42.735 + DISTRO=fedora39 00:00:42.735 + CPUS=10 00:00:42.735 + RAM=12288 00:00:42.735 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:42.735 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:42.735 00:00:42.735 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:00:42.735 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:00:42.735 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:00:42.735 HELP=0 00:00:42.735 DRY_RUN=0 00:00:42.736 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:42.736 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:42.736 NVME_AUTO_CREATE=0 00:00:42.736 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:42.736 NVME_CMB=,,,, 00:00:42.736 NVME_PMR=,,,, 00:00:42.736 NVME_ZNS=,,,, 00:00:42.736 NVME_MS=true,,,, 00:00:42.736 NVME_FDP=,,,on, 00:00:42.736 SPDK_VAGRANT_DISTRO=fedora39 00:00:42.736 SPDK_VAGRANT_VMCPU=10 00:00:42.736 SPDK_VAGRANT_VMRAM=12288 00:00:42.736 SPDK_VAGRANT_PROVIDER=libvirt 00:00:42.736 SPDK_VAGRANT_HTTP_PROXY= 00:00:42.736 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:42.736 SPDK_OPENSTACK_NETWORK=0 00:00:42.736 VAGRANT_PACKAGE_BOX=0 00:00:42.736 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:42.736 FORCE_DISTRO=true 00:00:42.736 VAGRANT_BOX_VERSION= 00:00:42.736 EXTRA_VAGRANTFILES= 00:00:42.736 NIC_MODEL=e1000 00:00:42.736 00:00:42.736 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:00:42.736 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:00:45.261 Bringing machine 'default' up with 'libvirt' provider... 00:00:45.519 ==> default: Creating image (snapshot of base box volume). 00:00:45.519 ==> default: Creating domain with the following settings... 
00:00:45.519 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733441897_412bd1001edd112e28a7 00:00:45.519 ==> default: -- Domain type: kvm 00:00:45.519 ==> default: -- Cpus: 10 00:00:45.519 ==> default: -- Feature: acpi 00:00:45.519 ==> default: -- Feature: apic 00:00:45.519 ==> default: -- Feature: pae 00:00:45.519 ==> default: -- Memory: 12288M 00:00:45.519 ==> default: -- Memory Backing: hugepages: 00:00:45.519 ==> default: -- Management MAC: 00:00:45.519 ==> default: -- Loader: 00:00:45.519 ==> default: -- Nvram: 00:00:45.519 ==> default: -- Base box: spdk/fedora39 00:00:45.519 ==> default: -- Storage pool: default 00:00:45.519 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733441897_412bd1001edd112e28a7.img (20G) 00:00:45.519 ==> default: -- Volume Cache: default 00:00:45.519 ==> default: -- Kernel: 00:00:45.519 ==> default: -- Initrd: 00:00:45.519 ==> default: -- Graphics Type: vnc 00:00:45.519 ==> default: -- Graphics Port: -1 00:00:45.519 ==> default: -- Graphics IP: 127.0.0.1 00:00:45.519 ==> default: -- Graphics Password: Not defined 00:00:45.519 ==> default: -- Video Type: cirrus 00:00:45.519 ==> default: -- Video VRAM: 9216 00:00:45.519 ==> default: -- Sound Type: 00:00:45.519 ==> default: -- Keymap: en-us 00:00:45.519 ==> default: -- TPM Path: 00:00:45.519 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:45.519 ==> default: -- Command line args: 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:45.519 ==> default: -> value=-drive, 00:00:45.519 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:45.519 ==> default: -> value=-drive, 00:00:45.519 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:45.519 ==> default: -> value=-drive, 00:00:45.519 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:45.519 ==> default: -> value=-drive, 00:00:45.519 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:45.519 ==> default: -> value=-drive, 00:00:45.519 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:45.519 ==> default: -> value=-drive, 00:00:45.519 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:45.519 ==> default: -> value=-device, 00:00:45.519 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:45.777 ==> default: Creating shared folders metadata... 00:00:45.777 ==> default: Starting domain. 00:00:47.150 ==> default: Waiting for domain to get an IP address... 00:01:05.226 ==> default: Waiting for SSH to become available... 00:01:05.226 ==> default: Configuring and enabling network interfaces... 00:01:07.756 default: SSH address: 192.168.121.125:22 00:01:07.756 default: SSH username: vagrant 00:01:07.756 default: SSH auth method: private key 00:01:09.654 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:14.918 ==> default: Mounting SSHFS shared folder... 00:01:16.815 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:16.815 ==> default: Checking Mount.. 00:01:17.453 ==> default: Folder Successfully Mounted! 00:01:17.453 00:01:17.453 SUCCESS! 00:01:17.453 00:01:17.453 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:17.453 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.453 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:17.453 00:01:17.460 [Pipeline] } 00:01:17.475 [Pipeline] // stage 00:01:17.484 [Pipeline] dir 00:01:17.485 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:01:17.487 [Pipeline] { 00:01:17.499 [Pipeline] catchError 00:01:17.501 [Pipeline] { 00:01:17.517 [Pipeline] sh 00:01:17.800 + vagrant ssh-config --host vagrant 00:01:17.800 + sed -ne '/^Host/,$p' 00:01:17.800 + tee ssh_conf 00:01:20.328 Host vagrant 00:01:20.328 HostName 192.168.121.125 00:01:20.328 User vagrant 00:01:20.328 Port 22 00:01:20.328 UserKnownHostsFile /dev/null 00:01:20.328 StrictHostKeyChecking no 00:01:20.328 PasswordAuthentication no 00:01:20.328 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:20.328 IdentitiesOnly yes 00:01:20.328 LogLevel FATAL 00:01:20.328 ForwardAgent yes 00:01:20.328 ForwardX11 yes 00:01:20.328 00:01:20.341 [Pipeline] withEnv 00:01:20.344 [Pipeline] { 00:01:20.358 [Pipeline] sh 00:01:20.636 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:20.636 source /etc/os-release 00:01:20.636 [[ -e /image.version ]] && img=$(< /image.version) 00:01:20.636 # Minimal, systemd-like check. 
00:01:20.636 if [[ -e /.dockerenv ]]; then 00:01:20.636 # Clear garbage from the node'\''s name: 00:01:20.636 # agt-er_autotest_547-896 -> autotest_547-896 00:01:20.636 # $HOSTNAME is the actual container id 00:01:20.636 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:20.636 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:20.636 # We can assume this is a mount from a host where container is running, 00:01:20.636 # so fetch its hostname to easily identify the target swarm worker. 00:01:20.636 container="$(< /etc/hostname) ($agent)" 00:01:20.636 else 00:01:20.636 # Fallback 00:01:20.636 container=$agent 00:01:20.636 fi 00:01:20.636 fi 00:01:20.636 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:20.636 ' 00:01:20.646 [Pipeline] } 00:01:20.663 [Pipeline] // withEnv 00:01:20.672 [Pipeline] setCustomBuildProperty 00:01:20.690 [Pipeline] stage 00:01:20.692 [Pipeline] { (Tests) 00:01:20.713 [Pipeline] sh 00:01:20.992 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:21.006 [Pipeline] sh 00:01:21.282 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:21.296 [Pipeline] timeout 00:01:21.296 Timeout set to expire in 50 min 00:01:21.299 [Pipeline] { 00:01:21.313 [Pipeline] sh 00:01:21.592 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:21.850 HEAD is now at a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:01:21.861 [Pipeline] sh 00:01:22.138 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:22.152 [Pipeline] sh 00:01:22.430 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:22.446 [Pipeline] sh 00:01:22.721 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:22.721 ++ readlink -f spdk_repo 00:01:22.721 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:22.721 + [[ -n /home/vagrant/spdk_repo ]] 00:01:22.721 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:22.721 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:22.721 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:22.721 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:22.721 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:22.721 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:22.721 + cd /home/vagrant/spdk_repo 00:01:22.721 + source /etc/os-release 00:01:22.721 ++ NAME='Fedora Linux' 00:01:22.721 ++ VERSION='39 (Cloud Edition)' 00:01:22.721 ++ ID=fedora 00:01:22.721 ++ VERSION_ID=39 00:01:22.721 ++ VERSION_CODENAME= 00:01:22.721 ++ PLATFORM_ID=platform:f39 00:01:22.721 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:22.721 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:22.721 ++ LOGO=fedora-logo-icon 00:01:22.721 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:22.721 ++ HOME_URL=https://fedoraproject.org/ 00:01:22.721 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:22.721 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:22.722 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:22.722 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:22.722 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:22.722 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:22.722 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:22.722 ++ SUPPORT_END=2024-11-12 00:01:22.722 ++ VARIANT='Cloud Edition' 00:01:22.722 ++ VARIANT_ID=cloud 00:01:22.722 + uname -a 00:01:22.722 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:22.722 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:23.234 Hugepages 00:01:23.234 node hugesize free / total 00:01:23.234 node0 1048576kB 0 / 0 00:01:23.234 node0 2048kB 0 / 0 00:01:23.234 00:01:23.234 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.493 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:23.493 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:23.493 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:23.493 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:23.493 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:23.493 + rm -f /tmp/spdk-ld-path 00:01:23.493 + source autorun-spdk.conf 00:01:23.493 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.493 ++ SPDK_TEST_NVME=1 00:01:23.493 ++ SPDK_TEST_FTL=1 00:01:23.493 ++ SPDK_TEST_ISAL=1 00:01:23.493 ++ SPDK_RUN_ASAN=1 00:01:23.493 ++ SPDK_RUN_UBSAN=1 00:01:23.493 ++ SPDK_TEST_XNVME=1 00:01:23.493 ++ SPDK_TEST_NVME_FDP=1 00:01:23.493 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.493 ++ RUN_NIGHTLY=1 00:01:23.493 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.493 + [[ -n '' ]] 00:01:23.493 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:23.493 + for M in /var/spdk/build-*-manifest.txt 00:01:23.493 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:23.493 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:23.493 + for M in /var/spdk/build-*-manifest.txt 00:01:23.493 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.493 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:23.493 + for M in /var/spdk/build-*-manifest.txt 00:01:23.493 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.493 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:23.493 ++ uname 00:01:23.493 + [[ Linux == \L\i\n\u\x ]] 00:01:23.493 + sudo dmesg -T 00:01:23.493 + sudo dmesg --clear 00:01:23.493 + dmesg_pid=5022 00:01:23.493 
+ [[ Fedora Linux == FreeBSD ]] 00:01:23.493 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.493 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.493 + sudo dmesg -Tw 00:01:23.493 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.493 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.493 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.493 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.493 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.493 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:23.493 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.493 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.493 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.493 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.493 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.493 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.493 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:23.493 23:38:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:23.493 23:38:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.493 23:38:56 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:23.493 23:38:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:23.493 23:38:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:23.493 23:38:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:23.493 23:38:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:23.493 23:38:56 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:23.493 23:38:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.493 23:38:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.493 23:38:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.493 23:38:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.493 23:38:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.493 23:38:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.493 23:38:56 -- paths/export.sh@5 -- $ export PATH 00:01:23.493 23:38:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.493 23:38:56 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:23.493 23:38:56 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:23.493 23:38:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733441936.XXXXXX 00:01:23.751 23:38:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733441936.1ZOh3V 00:01:23.751 23:38:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:23.751 23:38:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:23.751 23:38:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:23.751 23:38:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:23.751 23:38:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.751 23:38:56 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:23.751 23:38:56 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:23.751 23:38:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.751 23:38:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:23.751 23:38:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:23.751 23:38:56 -- pm/common@17 -- $ local monitor 00:01:23.751 23:38:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.751 23:38:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.751 23:38:56 -- pm/common@25 -- $ sleep 1 00:01:23.751 23:38:56 -- pm/common@21 -- $ date +%s 00:01:23.751 23:38:56 -- pm/common@21 -- $ date +%s 00:01:23.751 23:38:56 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733441936 00:01:23.751 23:38:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733441936 00:01:23.751 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733441936_collect-vmstat.pm.log 00:01:23.751 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733441936_collect-cpu-load.pm.log 00:01:24.686 23:38:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:24.686 23:38:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.686 23:38:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.686 23:38:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:24.686 23:38:57 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.686 Thu Dec 5 11:38:57 PM UTC 2024 00:01:24.686 23:38:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.686 v25.01-pre-303-ga5e6ecf28 00:01:24.686 23:38:57 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:24.686 23:38:57 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:24.686 23:38:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:24.686 23:38:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:24.686 23:38:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.686 ************************************ 00:01:24.686 START TEST asan 00:01:24.686 ************************************ 00:01:24.686 using asan 00:01:24.686 23:38:57 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:24.686 00:01:24.686 real 0m0.000s 00:01:24.686 user 0m0.000s 00:01:24.686 sys 0m0.000s 00:01:24.686 23:38:57 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:24.686 23:38:57 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.686 ************************************ 00:01:24.686 END TEST asan 00:01:24.686 ************************************ 00:01:24.686 23:38:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.686 23:38:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.686 23:38:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:24.686 23:38:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:24.686 23:38:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.686 ************************************ 00:01:24.686 START TEST ubsan 00:01:24.686 ************************************ 00:01:24.686 using ubsan 00:01:24.686 23:38:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:24.686 00:01:24.686 real 0m0.000s 00:01:24.686 user 0m0.000s 00:01:24.686 sys 0m0.000s 00:01:24.686 23:38:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:24.686 23:38:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.686 ************************************ 00:01:24.686 END TEST ubsan 00:01:24.686 ************************************ 00:01:24.686 23:38:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.686 23:38:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.686 23:38:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.686 23:38:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.686 23:38:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.686 23:38:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.686 23:38:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:24.686 23:38:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.686 23:38:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:24.686 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:24.686 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:25.251 Using 'verbs' RDMA provider 00:01:35.777 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:45.829 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:45.829 Creating mk/config.mk...done. 00:01:45.829 Creating mk/cc.flags.mk...done. 00:01:45.829 Type 'make' to build. 00:01:45.829 23:39:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:45.829 23:39:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:45.829 23:39:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:45.829 23:39:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.829 ************************************ 00:01:45.829 START TEST make 00:01:45.829 ************************************ 00:01:45.829 23:39:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:45.829 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:45.829 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:45.829 meson setup builddir \ 00:01:45.829 -Dwith-libaio=enabled \ 00:01:45.829 -Dwith-liburing=enabled \ 00:01:45.829 -Dwith-libvfn=disabled \ 00:01:45.829 -Dwith-spdk=disabled \ 00:01:45.829 -Dexamples=false \ 00:01:45.829 -Dtests=false \ 00:01:45.829 -Dtools=false && \ 00:01:45.829 meson compile -C builddir && \ 00:01:45.829 cd -) 00:01:45.829 make[1]: Nothing to be done for 'all'. 
00:01:47.730 The Meson build system 00:01:47.730 Version: 1.5.0 00:01:47.730 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:47.730 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:47.730 Build type: native build 00:01:47.730 Project name: xnvme 00:01:47.730 Project version: 0.7.5 00:01:47.730 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:47.730 C linker for the host machine: cc ld.bfd 2.40-14 00:01:47.730 Host machine cpu family: x86_64 00:01:47.730 Host machine cpu: x86_64 00:01:47.730 Message: host_machine.system: linux 00:01:47.730 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:47.730 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:47.730 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:47.730 Run-time dependency threads found: YES 00:01:47.730 Has header "setupapi.h" : NO 00:01:47.730 Has header "linux/blkzoned.h" : YES 00:01:47.730 Has header "linux/blkzoned.h" : YES (cached) 00:01:47.730 Has header "libaio.h" : YES 00:01:47.730 Library aio found: YES 00:01:47.730 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:47.730 Run-time dependency liburing found: YES 2.2 00:01:47.730 Dependency libvfn skipped: feature with-libvfn disabled 00:01:47.730 Found CMake: /usr/bin/cmake (3.27.7) 00:01:47.730 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:01:47.730 Subproject spdk : skipped: feature with-spdk disabled 00:01:47.730 Run-time dependency appleframeworks found: NO (tried framework) 00:01:47.730 Run-time dependency appleframeworks found: NO (tried framework) 00:01:47.730 Library rt found: YES 00:01:47.730 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:47.730 Configuring xnvme_config.h using configuration 00:01:47.730 Configuring xnvme.spec using configuration 00:01:47.730 Run-time dependency bash-completion found: YES 2.11 00:01:47.730 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:47.730 Program cp found: YES (/usr/bin/cp) 00:01:47.730 Build targets in project: 3 00:01:47.730 00:01:47.730 xnvme 0.7.5 00:01:47.730 00:01:47.730 Subprojects 00:01:47.730 spdk : NO Feature 'with-spdk' disabled 00:01:47.730 00:01:47.730 User defined options 00:01:47.730 examples : false 00:01:47.730 tests : false 00:01:47.730 tools : false 00:01:47.730 with-libaio : enabled 00:01:47.730 with-liburing: enabled 00:01:47.730 with-libvfn : disabled 00:01:47.730 with-spdk : disabled 00:01:47.730 00:01:47.730 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.300 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:48.300 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:01:48.300 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:01:48.300 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:01:48.300 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:01:48.300 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:01:48.300 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:01:48.300 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:01:48.300 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:01:48.300 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:01:48.300 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:01:48.300 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:01:48.300 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:01:48.300 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:01:48.561 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:01:48.561 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:01:48.561 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:01:48.561 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:01:48.561 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:01:48.561 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:01:48.561 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:01:48.561 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:01:48.561 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:01:48.561 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:01:48.561 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:01:48.561 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:01:48.561 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:01:48.561 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:01:48.561 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:01:48.561 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:01:48.561 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:01:48.561 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:01:48.561 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:01:48.561 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:01:48.561 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:01:48.561 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:01:48.561 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:01:48.561 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:01:48.561 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:01:48.561 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:01:48.561 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:01:48.561 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:01:48.561 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:01:48.561 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:01:48.561 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:01:48.561 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:01:48.561 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:01:48.823 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:01:48.823 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:01:48.823 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:01:48.823 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:01:48.823 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:01:48.823 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:01:48.823 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:01:48.823 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:01:48.823 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:01:48.823 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:01:48.823 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:01:48.823 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:01:48.823 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:01:48.823 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:01:48.823 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:01:48.823 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:01:48.823 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:01:48.823 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:01:48.823 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:01:48.823 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:01:48.823 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:01:48.823 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:01:49.083 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:01:49.083 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:01:49.083 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:01:49.083 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:01:49.083 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:01:49.344 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:01:49.344 [75/76] Linking static target lib/libxnvme.a 00:01:49.344 [76/76] Linking target lib/libxnvme.so.0.7.5 00:01:49.344 INFO: autodetecting backend as ninja 00:01:49.344 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:49.344 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:01:57.506 The Meson build system 00:01:57.506 Version: 1.5.0 00:01:57.506 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:57.506 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:57.506 Build type: native build 00:01:57.506 Program cat found: YES (/usr/bin/cat) 00:01:57.506 Project name: DPDK 00:01:57.506 Project version: 24.03.0 00:01:57.506 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:57.506 C linker for the host machine: cc ld.bfd 2.40-14 00:01:57.506 Host machine cpu family: x86_64 00:01:57.506 Host machine cpu: x86_64 00:01:57.506 Message: ## Building in Developer Mode ## 00:01:57.506 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.506 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.506 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.506 Program python3 found: YES (/usr/bin/python3) 00:01:57.506 Program cat found: YES (/usr/bin/cat) 00:01:57.506 Compiler for C supports arguments -march=native: YES 00:01:57.506 Checking for size of "void *" : 8 00:01:57.506 Checking for size of "void *" : 8 (cached) 00:01:57.506 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:01:57.506 Library m found: YES 00:01:57.506 Library numa found: YES 00:01:57.506 Has header "numaif.h" : YES 00:01:57.506 Library fdt found: NO 00:01:57.506 Library execinfo found: NO 00:01:57.506 Has header "execinfo.h" : YES 00:01:57.506 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:57.506 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.506 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.507 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.507 Run-time dependency openssl found: YES 3.1.1 00:01:57.507 Run-time dependency libpcap found: YES 1.10.4 00:01:57.507 Has header "pcap.h" with dependency libpcap: YES 00:01:57.507 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.507 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.507 Compiler for C supports arguments -Wformat: YES 00:01:57.507 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.507 Compiler for C supports arguments -Wformat-security: NO 00:01:57.507 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.507 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.507 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.507 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.507 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.507 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.507 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.507 Compiler for C supports arguments -Wundef: YES 00:01:57.507 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.507 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.507 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.507 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.507 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.507 Program objdump found: YES (/usr/bin/objdump) 00:01:57.507 Compiler for C supports arguments -mavx512f: YES 00:01:57.507 Checking if "AVX512 checking" compiles: YES 00:01:57.507 Fetching value of define "__SSE4_2__" : 1 00:01:57.507 Fetching value of define "__AES__" : 1 00:01:57.507 Fetching value of define "__AVX__" : 1 00:01:57.507 Fetching value of define "__AVX2__" : 1 00:01:57.507 Fetching value of define "__AVX512BW__" : 1 00:01:57.507 Fetching value of define "__AVX512CD__" : 1 00:01:57.507 Fetching value of define "__AVX512DQ__" : 1 00:01:57.507 Fetching value of define "__AVX512F__" : 1 00:01:57.507 Fetching value of define "__AVX512VL__" : 1 00:01:57.507 Fetching value of define "__PCLMUL__" : 1 00:01:57.507 Fetching value of define "__RDRND__" : 1 00:01:57.507 Fetching value of define "__RDSEED__" : 1 00:01:57.507 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:57.507 Fetching value of define "__znver1__" : (undefined) 00:01:57.507 Fetching value of define "__znver2__" : (undefined) 00:01:57.507 Fetching value of define "__znver3__" : (undefined) 00:01:57.507 Fetching value of define "__znver4__" : (undefined) 00:01:57.507 Library asan found: YES 00:01:57.507 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.507 Message: lib/log: Defining dependency "log" 00:01:57.507 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.507 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.507 Library rt found: YES 00:01:57.507 Checking for function "getentropy" : NO 00:01:57.507 Message: 
lib/eal: Defining dependency "eal" 00:01:57.507 Message: lib/ring: Defining dependency "ring" 00:01:57.507 Message: lib/rcu: Defining dependency "rcu" 00:01:57.507 Message: lib/mempool: Defining dependency "mempool" 00:01:57.507 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.507 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.507 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.507 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.507 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.507 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.507 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:57.507 Compiler for C supports arguments -mpclmul: YES 00:01:57.507 Compiler for C supports arguments -maes: YES 00:01:57.507 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.507 Compiler for C supports arguments -mavx512bw: YES 00:01:57.507 Compiler for C supports arguments -mavx512dq: YES 00:01:57.507 Compiler for C supports arguments -mavx512vl: YES 00:01:57.507 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.507 Compiler for C supports arguments -mavx2: YES 00:01:57.507 Compiler for C supports arguments -mavx: YES 00:01:57.507 Message: lib/net: Defining dependency "net" 00:01:57.507 Message: lib/meter: Defining dependency "meter" 00:01:57.507 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.507 Message: lib/pci: Defining dependency "pci" 00:01:57.507 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.507 Message: lib/hash: Defining dependency "hash" 00:01:57.507 Message: lib/timer: Defining dependency "timer" 00:01:57.507 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.507 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.507 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.507 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.507 Message: lib/power: Defining dependency "power" 00:01:57.507 Message: lib/reorder: Defining dependency "reorder" 00:01:57.507 Message: lib/security: Defining dependency "security" 00:01:57.507 Has header "linux/userfaultfd.h" : YES 00:01:57.507 Has header "linux/vduse.h" : YES 00:01:57.507 Message: lib/vhost: Defining dependency "vhost" 00:01:57.507 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.507 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.507 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.507 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.507 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.507 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.507 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.507 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.507 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.507 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.507 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.507 Configuring doxy-api-html.conf using configuration 00:01:57.507 Configuring doxy-api-man.conf using configuration 00:01:57.507 Program mandb found: YES (/usr/bin/mandb) 00:01:57.507 Program sphinx-build found: NO 00:01:57.507 Configuring rte_build_config.h using configuration 00:01:57.507 Message: 00:01:57.507 ================= 00:01:57.507 Applications Enabled 00:01:57.507 
================= 00:01:57.507 00:01:57.507 apps: 00:01:57.507 00:01:57.507 00:01:57.507 Message: 00:01:57.507 ================= 00:01:57.507 Libraries Enabled 00:01:57.507 ================= 00:01:57.507 00:01:57.507 libs: 00:01:57.507 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.507 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.507 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.507 00:01:57.507 Message: 00:01:57.507 =============== 00:01:57.507 Drivers Enabled 00:01:57.507 =============== 00:01:57.507 00:01:57.507 common: 00:01:57.507 00:01:57.507 bus: 00:01:57.508 pci, vdev, 00:01:57.508 mempool: 00:01:57.508 ring, 00:01:57.508 dma: 00:01:57.508 00:01:57.508 net: 00:01:57.508 00:01:57.508 crypto: 00:01:57.508 00:01:57.508 compress: 00:01:57.508 00:01:57.508 vdpa: 00:01:57.508 00:01:57.508 00:01:57.508 Message: 00:01:57.508 ================= 00:01:57.508 Content Skipped 00:01:57.508 ================= 00:01:57.508 00:01:57.508 apps: 00:01:57.508 dumpcap: explicitly disabled via build config 00:01:57.508 graph: explicitly disabled via build config 00:01:57.508 pdump: explicitly disabled via build config 00:01:57.508 proc-info: explicitly disabled via build config 00:01:57.508 test-acl: explicitly disabled via build config 00:01:57.508 test-bbdev: explicitly disabled via build config 00:01:57.508 test-cmdline: explicitly disabled via build config 00:01:57.508 test-compress-perf: explicitly disabled via build config 00:01:57.508 test-crypto-perf: explicitly disabled via build config 00:01:57.508 test-dma-perf: explicitly disabled via build config 00:01:57.508 test-eventdev: explicitly disabled via build config 00:01:57.508 test-fib: explicitly disabled via build config 00:01:57.508 test-flow-perf: explicitly disabled via build config 00:01:57.508 test-gpudev: explicitly disabled via build config 00:01:57.508 test-mldev: explicitly disabled via build config 00:01:57.508 test-pipeline: explicitly disabled via build config 00:01:57.508 test-pmd: explicitly disabled via build config 00:01:57.508 test-regex: explicitly disabled via build config 00:01:57.508 test-sad: explicitly disabled via build config 00:01:57.508 test-security-perf: explicitly disabled via build config 00:01:57.508 00:01:57.508 libs: 00:01:57.508 argparse: explicitly disabled via build config 00:01:57.508 metrics: explicitly disabled via build config 00:01:57.508 acl: explicitly disabled via build config 00:01:57.508 bbdev: explicitly disabled via build config 00:01:57.508 bitratestats: explicitly disabled via build config 00:01:57.508 bpf: explicitly disabled via build config 00:01:57.508 cfgfile: explicitly disabled via build config 00:01:57.508 distributor: explicitly disabled via build config 00:01:57.508 efd: explicitly disabled via build config 00:01:57.508 eventdev: explicitly disabled via build config 00:01:57.508 dispatcher: explicitly disabled via build config 00:01:57.508 gpudev: explicitly disabled via build config 00:01:57.508 gro: explicitly disabled via build config 00:01:57.508 gso: explicitly disabled via build config 00:01:57.508 ip_frag: explicitly disabled via build config 00:01:57.508 jobstats: explicitly disabled via build config 00:01:57.508 latencystats: explicitly disabled via build config 00:01:57.508 lpm: explicitly disabled via build config 00:01:57.508 member: explicitly disabled via build config 00:01:57.508 pcapng: explicitly disabled via build config 00:01:57.508 rawdev: explicitly disabled via build config 00:01:57.508 regexdev: explicitly 
disabled via build config 00:01:57.508 mldev: explicitly disabled via build config 00:01:57.508 rib: explicitly disabled via build config 00:01:57.508 sched: explicitly disabled via build config 00:01:57.508 stack: explicitly disabled via build config 00:01:57.508 ipsec: explicitly disabled via build config 00:01:57.508 pdcp: explicitly disabled via build config 00:01:57.508 fib: explicitly disabled via build config 00:01:57.508 port: explicitly disabled via build config 00:01:57.508 pdump: explicitly disabled via build config 00:01:57.508 table: explicitly disabled via build config 00:01:57.508 pipeline: explicitly disabled via build config 00:01:57.508 graph: explicitly disabled via build config 00:01:57.508 node: explicitly disabled via build config 00:01:57.508 00:01:57.508 drivers: 00:01:57.508 common/cpt: not in enabled drivers build config 00:01:57.508 common/dpaax: not in enabled drivers build config 00:01:57.508 common/iavf: not in enabled drivers build config 00:01:57.508 common/idpf: not in enabled drivers build config 00:01:57.508 common/ionic: not in enabled drivers build config 00:01:57.508 common/mvep: not in enabled drivers build config 00:01:57.508 common/octeontx: not in enabled drivers build config 00:01:57.508 bus/auxiliary: not in enabled drivers build config 00:01:57.508 bus/cdx: not in enabled drivers build config 00:01:57.508 bus/dpaa: not in enabled drivers build config 00:01:57.508 bus/fslmc: not in enabled drivers build config 00:01:57.508 bus/ifpga: not in enabled drivers build config 00:01:57.508 bus/platform: not in enabled drivers build config 00:01:57.508 bus/uacce: not in enabled drivers build config 00:01:57.508 bus/vmbus: not in enabled drivers build config 00:01:57.508 common/cnxk: not in enabled drivers build config 00:01:57.508 common/mlx5: not in enabled drivers build config 00:01:57.508 common/nfp: not in enabled drivers build config 00:01:57.508 common/nitrox: not in enabled drivers build config 00:01:57.508 common/qat: not in enabled drivers build config 00:01:57.508 common/sfc_efx: not in enabled drivers build config 00:01:57.508 mempool/bucket: not in enabled drivers build config 00:01:57.508 mempool/cnxk: not in enabled drivers build config 00:01:57.508 mempool/dpaa: not in enabled drivers build config 00:01:57.508 mempool/dpaa2: not in enabled drivers build config 00:01:57.508 mempool/octeontx: not in enabled drivers build config 00:01:57.508 mempool/stack: not in enabled drivers build config 00:01:57.508 dma/cnxk: not in enabled drivers build config 00:01:57.508 dma/dpaa: not in enabled drivers build config 00:01:57.508 dma/dpaa2: not in enabled drivers build config 00:01:57.508 dma/hisilicon: not in enabled drivers build config 00:01:57.508 dma/idxd: not in enabled drivers build config 00:01:57.508 dma/ioat: not in enabled drivers build config 00:01:57.508 dma/skeleton: not in enabled drivers build config 00:01:57.508 net/af_packet: not in enabled drivers build config 00:01:57.508 net/af_xdp: not in enabled drivers build config 00:01:57.508 net/ark: not in enabled drivers build config 00:01:57.508 net/atlantic: not in enabled drivers build config 00:01:57.508 net/avp: not in enabled drivers build config 00:01:57.508 net/axgbe: not in enabled drivers build config 00:01:57.508 net/bnx2x: not in enabled drivers build config 00:01:57.508 net/bnxt: not in enabled drivers build config 00:01:57.508 net/bonding: not in enabled drivers build config 00:01:57.508 net/cnxk: not in enabled drivers build config 00:01:57.508 net/cpfl: not in enabled drivers 
build config 00:01:57.508 net/cxgbe: not in enabled drivers build config 00:01:57.508 net/dpaa: not in enabled drivers build config 00:01:57.508 net/dpaa2: not in enabled drivers build config 00:01:57.508 net/e1000: not in enabled drivers build config 00:01:57.508 net/ena: not in enabled drivers build config 00:01:57.508 net/enetc: not in enabled drivers build config 00:01:57.508 net/enetfec: not in enabled drivers build config 00:01:57.508 net/enic: not in enabled drivers build config 00:01:57.508 net/failsafe: not in enabled drivers build config 00:01:57.508 net/fm10k: not in enabled drivers build config 00:01:57.508 net/gve: not in enabled drivers build config 00:01:57.508 net/hinic: not in enabled drivers build config 00:01:57.508 net/hns3: not in enabled drivers build config 00:01:57.508 net/i40e: not in enabled drivers build config 00:01:57.508 net/iavf: not in enabled drivers build config 00:01:57.508 net/ice: not in enabled drivers build config 00:01:57.508 net/idpf: not in enabled drivers build config 00:01:57.508 net/igc: not in enabled drivers build config 00:01:57.508 net/ionic: not in enabled drivers build config 00:01:57.508 net/ipn3ke: not in enabled drivers build config 00:01:57.508 net/ixgbe: not in enabled drivers build config 00:01:57.508 net/mana: not in enabled drivers build config 00:01:57.508 net/memif: not in enabled drivers build config 00:01:57.508 net/mlx4: not in enabled drivers build config 00:01:57.508 net/mlx5: not in enabled drivers build config 00:01:57.508 net/mvneta: not in enabled drivers build config 00:01:57.508 net/mvpp2: not in enabled drivers build config 00:01:57.508 net/netvsc: not in enabled drivers build config 00:01:57.508 net/nfb: not in enabled drivers build config 00:01:57.508 net/nfp: not in enabled drivers build config 00:01:57.508 net/ngbe: not in enabled drivers build config 00:01:57.508 net/null: not in enabled drivers build config 00:01:57.508 net/octeontx: not in enabled drivers build config 00:01:57.508 net/octeon_ep: not in enabled drivers build config 00:01:57.508 net/pcap: not in enabled drivers build config 00:01:57.508 net/pfe: not in enabled drivers build config 00:01:57.508 net/qede: not in enabled drivers build config 00:01:57.508 net/ring: not in enabled drivers build config 00:01:57.508 net/sfc: not in enabled drivers build config 00:01:57.508 net/softnic: not in enabled drivers build config 00:01:57.508 net/tap: not in enabled drivers build config 00:01:57.508 net/thunderx: not in enabled drivers build config 00:01:57.508 net/txgbe: not in enabled drivers build config 00:01:57.508 net/vdev_netvsc: not in enabled drivers build config 00:01:57.508 net/vhost: not in enabled drivers build config 00:01:57.508 net/virtio: not in enabled drivers build config 00:01:57.508 net/vmxnet3: not in enabled drivers build config 00:01:57.508 raw/*: missing internal dependency, "rawdev" 00:01:57.508 crypto/armv8: not in enabled drivers build config 00:01:57.509 crypto/bcmfs: not in enabled drivers build config 00:01:57.509 crypto/caam_jr: not in enabled drivers build config 00:01:57.509 crypto/ccp: not in enabled drivers build config 00:01:57.509 crypto/cnxk: not in enabled drivers build config 00:01:57.509 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.509 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.509 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.509 crypto/mlx5: not in enabled drivers build config 00:01:57.509 crypto/mvsam: not in enabled drivers build config 00:01:57.509 crypto/nitrox: 
not in enabled drivers build config 00:01:57.509 crypto/null: not in enabled drivers build config 00:01:57.509 crypto/octeontx: not in enabled drivers build config 00:01:57.509 crypto/openssl: not in enabled drivers build config 00:01:57.509 crypto/scheduler: not in enabled drivers build config 00:01:57.509 crypto/uadk: not in enabled drivers build config 00:01:57.509 crypto/virtio: not in enabled drivers build config 00:01:57.509 compress/isal: not in enabled drivers build config 00:01:57.509 compress/mlx5: not in enabled drivers build config 00:01:57.509 compress/nitrox: not in enabled drivers build config 00:01:57.509 compress/octeontx: not in enabled drivers build config 00:01:57.509 compress/zlib: not in enabled drivers build config 00:01:57.509 regex/*: missing internal dependency, "regexdev" 00:01:57.509 ml/*: missing internal dependency, "mldev" 00:01:57.509 vdpa/ifc: not in enabled drivers build config 00:01:57.509 vdpa/mlx5: not in enabled drivers build config 00:01:57.509 vdpa/nfp: not in enabled drivers build config 00:01:57.509 vdpa/sfc: not in enabled drivers build config 00:01:57.509 event/*: missing internal dependency, "eventdev" 00:01:57.509 baseband/*: missing internal dependency, "bbdev" 00:01:57.509 gpu/*: missing internal dependency, "gpudev" 00:01:57.509 00:01:57.509 00:01:57.509 Build targets in project: 84 00:01:57.509 00:01:57.509 DPDK 24.03.0 00:01:57.509 00:01:57.509 User defined options 00:01:57.509 buildtype : debug 00:01:57.509 default_library : shared 00:01:57.509 libdir : lib 00:01:57.509 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:57.509 b_sanitize : address 00:01:57.509 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:57.509 c_link_args : 00:01:57.509 cpu_instruction_set: native 00:01:57.509 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:57.509 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:57.509 enable_docs : false 00:01:57.509 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:57.509 enable_kmods : false 00:01:57.509 max_lcores : 128 00:01:57.509 tests : false 00:01:57.509 00:01:57.509 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.509 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:57.509 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.509 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.509 [3/267] Linking static target lib/librte_kvargs.a 00:01:57.509 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.509 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.509 [6/267] Linking static target lib/librte_log.a 00:01:57.509 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.509 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.509 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.509 [10/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.509 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.509 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.509 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.509 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.509 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.509 [16/267] Linking static target lib/librte_telemetry.a 00:01:57.509 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.509 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.770 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.770 [20/267] Linking target lib/librte_log.so.24.1 00:01:57.770 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.770 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.770 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.770 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.770 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.770 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.770 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.032 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:58.032 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.032 [30/267] Linking target lib/librte_kvargs.so.24.1 00:01:58.032 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.032 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.032 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.032 [34/267] Linking target lib/librte_telemetry.so.24.1 00:01:58.293 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:58.293 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.293 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.293 [38/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:58.293 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.293 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.293 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.551 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.551 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.551 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.551 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.551 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.551 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.810 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
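The "User defined options" summary printed by meson a few lines above fully determines this DPDK sub-build. For reference, the same configuration could be requested by hand with a meson call along these lines; this is a sketch reassembled from the options echoed in the log (in this job the command is generated by SPDK's build scripts rather than typed manually, and the option names assume the DPDK 24.03 / Meson combination shown here):

    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Dmax_lcores=128 \
        -Dtests=false \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j 10

The [1/267] through [267/267] lines that follow are that ninja invocation compiling, linking and symbol-checking the 84 targets meson reported.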
00:01:58.810 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.810 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.810 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.810 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.810 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.069 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:59.069 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:59.069 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:59.069 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:59.069 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:59.069 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.330 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:59.330 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.330 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:59.330 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.330 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.330 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.330 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:59.592 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:59.592 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.592 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.592 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.592 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:59.592 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:59.850 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.850 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.850 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.850 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.850 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.109 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.109 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.109 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.109 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.109 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.109 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.109 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.109 [85/267] Linking static target lib/librte_ring.a 00:02:00.109 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.109 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.368 [88/267] Linking static target lib/librte_eal.a 00:02:00.368 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.368 [90/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.368 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.368 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.627 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.627 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.627 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.627 [96/267] Linking static target lib/librte_rcu.a 00:02:00.627 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.627 [98/267] Linking static target lib/librte_mempool.a 00:02:00.627 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.627 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:00.884 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.884 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.884 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:00.884 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:00.884 [105/267] Linking static target lib/librte_mbuf.a 00:02:00.884 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.884 [107/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.884 [108/267] Linking static target lib/librte_net.a 00:02:01.142 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.142 [110/267] Linking static target lib/librte_meter.a 00:02:01.142 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.142 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.142 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.399 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.399 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.399 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.399 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.399 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.399 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.656 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.914 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.914 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.914 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.914 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.914 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.914 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.914 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.914 [128/267] Linking static target lib/librte_pci.a 00:02:01.914 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:02.206 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:02.206 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:02.206 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:02.206 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:02.206 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:02.206 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.206 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:02.206 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.206 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.206 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:02.206 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.206 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:02.206 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:02.206 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:02.469 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.469 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.469 [146/267] Linking static target lib/librte_cmdline.a 00:02:02.470 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:02.732 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:02.732 [149/267] Linking static target lib/librte_timer.a 00:02:02.732 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:02.732 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.732 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:02.732 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:02.732 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:02.989 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:02.989 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:02.989 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:02.989 [158/267] Linking static target lib/librte_compressdev.a 00:02:02.989 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.246 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.246 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.246 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.246 [163/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.246 [164/267] Linking static target lib/librte_dmadev.a 00:02:03.505 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:03.505 [166/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.505 [167/267] Linking static target lib/librte_ethdev.a 00:02:03.505 [168/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:03.505 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.505 [170/267] Linking static target lib/librte_hash.a 00:02:03.505 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.505 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.505 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.776 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.776 [175/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.776 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.776 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.776 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:04.033 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:04.033 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.033 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.033 [182/267] Linking static target lib/librte_power.a 00:02:04.033 [183/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.290 [184/267] Linking static target lib/librte_cryptodev.a 00:02:04.290 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.290 [186/267] Linking static target lib/librte_reorder.a 00:02:04.290 [187/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.290 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.290 [189/267] Linking static target lib/librte_security.a 00:02:04.290 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.551 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.551 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.809 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.809 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.067 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.067 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.067 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.067 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.067 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.324 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:05.324 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.324 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.581 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.581 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.581 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.581 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.582 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.582 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.582 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.839 [210/267] Generating drivers/rte_bus_vdev.pmd.c 
with a custom command 00:02:05.839 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.839 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.839 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.839 [214/267] Linking static target drivers/librte_bus_vdev.a 00:02:05.839 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.839 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.839 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:06.096 [218/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.096 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:06.096 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:06.096 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.096 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:06.356 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.356 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.356 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:06.356 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.614 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:07.564 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.564 [229/267] Linking target lib/librte_eal.so.24.1 00:02:07.826 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.826 [231/267] Linking target lib/librte_pci.so.24.1 00:02:07.826 [232/267] Linking target lib/librte_ring.so.24.1 00:02:07.826 [233/267] Linking target lib/librte_timer.so.24.1 00:02:07.826 [234/267] Linking target lib/librte_meter.so.24.1 00:02:07.826 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:07.826 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.826 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.826 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.826 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.826 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.826 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:08.085 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:08.085 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:08.085 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:08.085 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:08.085 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:08.085 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:08.085 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:08.085 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:08.343 [250/267] Linking 
target lib/librte_net.so.24.1 00:02:08.343 [251/267] Linking target lib/librte_cryptodev.so.24.1 00:02:08.343 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:08.343 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:08.343 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:08.343 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:08.343 [256/267] Linking target lib/librte_hash.so.24.1 00:02:08.343 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:08.343 [258/267] Linking target lib/librte_security.so.24.1 00:02:08.343 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.601 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.860 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:08.860 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.860 [263/267] Linking target lib/librte_power.so.24.1 00:02:10.239 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.239 [265/267] Linking static target lib/librte_vhost.a 00:02:11.175 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.175 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:11.175 INFO: autodetecting backend as ninja 00:02:11.175 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:23.460 CC lib/log/log.o 00:02:23.460 CC lib/log/log_flags.o 00:02:23.460 CC lib/log/log_deprecated.o 00:02:23.460 CC lib/ut/ut.o 00:02:23.460 CC lib/ut_mock/mock.o 00:02:23.460 LIB libspdk_ut.a 00:02:23.460 LIB libspdk_log.a 00:02:23.460 LIB libspdk_ut_mock.a 00:02:23.460 SO libspdk_ut.so.2.0 00:02:23.460 SO libspdk_log.so.7.1 00:02:23.460 SO libspdk_ut_mock.so.6.0 00:02:23.460 SYMLINK libspdk_ut.so 00:02:23.460 SYMLINK libspdk_log.so 00:02:23.460 SYMLINK libspdk_ut_mock.so 00:02:23.460 CC lib/ioat/ioat.o 00:02:23.460 CXX lib/trace_parser/trace.o 00:02:23.460 CC lib/util/base64.o 00:02:23.460 CC lib/util/bit_array.o 00:02:23.460 CC lib/util/crc16.o 00:02:23.460 CC lib/util/crc32.o 00:02:23.460 CC lib/util/cpuset.o 00:02:23.460 CC lib/util/crc32c.o 00:02:23.460 CC lib/dma/dma.o 00:02:23.460 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.460 CC lib/util/crc32_ieee.o 00:02:23.460 CC lib/util/crc64.o 00:02:23.460 CC lib/util/dif.o 00:02:23.460 CC lib/util/fd.o 00:02:23.460 CC lib/util/fd_group.o 00:02:23.460 LIB libspdk_dma.a 00:02:23.460 CC lib/util/file.o 00:02:23.460 CC lib/util/hexlify.o 00:02:23.460 SO libspdk_dma.so.5.0 00:02:23.460 CC lib/vfio_user/host/vfio_user.o 00:02:23.460 SYMLINK libspdk_dma.so 00:02:23.460 CC lib/util/iov.o 00:02:23.460 LIB libspdk_ioat.a 00:02:23.460 CC lib/util/math.o 00:02:23.460 SO libspdk_ioat.so.7.0 00:02:23.460 CC lib/util/net.o 00:02:23.460 SYMLINK libspdk_ioat.so 00:02:23.460 CC lib/util/pipe.o 00:02:23.460 CC lib/util/strerror_tls.o 00:02:23.460 CC lib/util/string.o 00:02:23.460 CC lib/util/uuid.o 00:02:23.460 CC lib/util/xor.o 00:02:23.460 LIB libspdk_vfio_user.a 00:02:23.460 CC lib/util/zipf.o 00:02:23.460 SO libspdk_vfio_user.so.5.0 00:02:23.460 CC lib/util/md5.o 00:02:23.460 SYMLINK libspdk_vfio_user.so 00:02:23.460 LIB libspdk_util.a 00:02:23.461 LIB libspdk_trace_parser.a 00:02:23.461 SO libspdk_util.so.10.1 00:02:23.461 SO libspdk_trace_parser.so.6.0 00:02:23.461 SYMLINK libspdk_trace_parser.so 
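Once the last librte_* shared objects are linked, the log switches from the DPDK sub-build to SPDK's own Makefile output: CC lines for each object file, LIB for static archives, and SO/SYMLINK for the versioned shared libraries. A minimal sketch of reproducing both stages outside the CI job, assuming the standard SPDK tree used here and configure flags matching the debug/ASAN/shared-library settings visible in the DPDK options above (the exact flags for this job come from its autorun configuration, so treat these as illustrative):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-shared
    make -j10   # builds the bundled DPDK in dpdk/build-tmp first, then the libspdk_* components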
00:02:23.461 SYMLINK libspdk_util.so 00:02:23.461 CC lib/vmd/vmd.o 00:02:23.461 CC lib/vmd/led.o 00:02:23.461 CC lib/rdma_utils/rdma_utils.o 00:02:23.461 CC lib/conf/conf.o 00:02:23.461 CC lib/json/json_util.o 00:02:23.461 CC lib/json/json_write.o 00:02:23.461 CC lib/json/json_parse.o 00:02:23.461 CC lib/env_dpdk/env.o 00:02:23.461 CC lib/env_dpdk/memory.o 00:02:23.461 CC lib/idxd/idxd.o 00:02:23.461 CC lib/idxd/idxd_user.o 00:02:23.461 LIB libspdk_conf.a 00:02:23.721 CC lib/idxd/idxd_kernel.o 00:02:23.721 CC lib/env_dpdk/pci.o 00:02:23.721 SO libspdk_conf.so.6.0 00:02:23.721 LIB libspdk_rdma_utils.a 00:02:23.721 SO libspdk_rdma_utils.so.1.0 00:02:23.721 LIB libspdk_json.a 00:02:23.721 SYMLINK libspdk_conf.so 00:02:23.721 CC lib/env_dpdk/init.o 00:02:23.721 SO libspdk_json.so.6.0 00:02:23.721 SYMLINK libspdk_rdma_utils.so 00:02:23.721 CC lib/env_dpdk/threads.o 00:02:23.721 SYMLINK libspdk_json.so 00:02:23.721 CC lib/env_dpdk/pci_ioat.o 00:02:23.721 CC lib/rdma_provider/common.o 00:02:23.721 CC lib/env_dpdk/pci_virtio.o 00:02:23.982 CC lib/env_dpdk/pci_vmd.o 00:02:23.982 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.982 CC lib/env_dpdk/pci_idxd.o 00:02:23.982 CC lib/env_dpdk/pci_event.o 00:02:23.982 CC lib/env_dpdk/sigbus_handler.o 00:02:23.982 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.982 CC lib/env_dpdk/pci_dpdk.o 00:02:23.982 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.982 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.982 LIB libspdk_idxd.a 00:02:23.982 SO libspdk_idxd.so.12.1 00:02:23.982 LIB libspdk_vmd.a 00:02:23.982 SO libspdk_vmd.so.6.0 00:02:23.982 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.982 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.982 SYMLINK libspdk_idxd.so 00:02:23.982 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.982 SYMLINK libspdk_vmd.so 00:02:23.982 LIB libspdk_rdma_provider.a 00:02:24.243 SO libspdk_rdma_provider.so.7.0 00:02:24.243 SYMLINK libspdk_rdma_provider.so 00:02:24.243 LIB libspdk_jsonrpc.a 00:02:24.243 SO libspdk_jsonrpc.so.6.0 00:02:24.504 SYMLINK libspdk_jsonrpc.so 00:02:24.504 CC lib/rpc/rpc.o 00:02:24.763 LIB libspdk_env_dpdk.a 00:02:24.763 LIB libspdk_rpc.a 00:02:24.763 SO libspdk_rpc.so.6.0 00:02:24.763 SO libspdk_env_dpdk.so.15.1 00:02:24.763 SYMLINK libspdk_rpc.so 00:02:25.024 SYMLINK libspdk_env_dpdk.so 00:02:25.024 CC lib/trace/trace.o 00:02:25.024 CC lib/keyring/keyring_rpc.o 00:02:25.024 CC lib/keyring/keyring.o 00:02:25.024 CC lib/trace/trace_flags.o 00:02:25.024 CC lib/trace/trace_rpc.o 00:02:25.024 CC lib/notify/notify.o 00:02:25.024 CC lib/notify/notify_rpc.o 00:02:25.285 LIB libspdk_notify.a 00:02:25.285 SO libspdk_notify.so.6.0 00:02:25.285 LIB libspdk_trace.a 00:02:25.285 SO libspdk_trace.so.11.0 00:02:25.285 SYMLINK libspdk_notify.so 00:02:25.285 LIB libspdk_keyring.a 00:02:25.285 SO libspdk_keyring.so.2.0 00:02:25.285 SYMLINK libspdk_trace.so 00:02:25.285 SYMLINK libspdk_keyring.so 00:02:25.547 CC lib/sock/sock_rpc.o 00:02:25.547 CC lib/sock/sock.o 00:02:25.547 CC lib/thread/thread.o 00:02:25.547 CC lib/thread/iobuf.o 00:02:26.119 LIB libspdk_sock.a 00:02:26.119 SO libspdk_sock.so.10.0 00:02:26.119 SYMLINK libspdk_sock.so 00:02:26.379 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.379 CC lib/nvme/nvme_ctrlr.o 00:02:26.379 CC lib/nvme/nvme_fabric.o 00:02:26.379 CC lib/nvme/nvme_ns_cmd.o 00:02:26.379 CC lib/nvme/nvme_pcie_common.o 00:02:26.379 CC lib/nvme/nvme_ns.o 00:02:26.379 CC lib/nvme/nvme_pcie.o 00:02:26.379 CC lib/nvme/nvme_qpair.o 00:02:26.379 CC lib/nvme/nvme.o 00:02:26.640 LIB libspdk_thread.a 00:02:26.902 SO libspdk_thread.so.11.0 
00:02:26.902 SYMLINK libspdk_thread.so 00:02:26.902 CC lib/nvme/nvme_quirks.o 00:02:26.902 CC lib/nvme/nvme_transport.o 00:02:26.902 CC lib/nvme/nvme_discovery.o 00:02:26.902 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.163 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.163 CC lib/nvme/nvme_tcp.o 00:02:27.163 CC lib/nvme/nvme_opal.o 00:02:27.163 CC lib/nvme/nvme_io_msg.o 00:02:27.424 CC lib/accel/accel.o 00:02:27.424 CC lib/nvme/nvme_poll_group.o 00:02:27.424 CC lib/nvme/nvme_zns.o 00:02:27.424 CC lib/nvme/nvme_stubs.o 00:02:27.685 CC lib/nvme/nvme_auth.o 00:02:27.685 CC lib/nvme/nvme_cuse.o 00:02:27.685 CC lib/nvme/nvme_rdma.o 00:02:27.685 CC lib/accel/accel_rpc.o 00:02:28.020 CC lib/accel/accel_sw.o 00:02:28.020 CC lib/blob/blobstore.o 00:02:28.020 CC lib/init/json_config.o 00:02:28.020 CC lib/virtio/virtio.o 00:02:28.280 CC lib/init/subsystem.o 00:02:28.280 CC lib/blob/request.o 00:02:28.541 CC lib/init/subsystem_rpc.o 00:02:28.541 CC lib/blob/zeroes.o 00:02:28.541 CC lib/virtio/virtio_vhost_user.o 00:02:28.541 CC lib/virtio/virtio_vfio_user.o 00:02:28.541 CC lib/init/rpc.o 00:02:28.541 LIB libspdk_accel.a 00:02:28.541 SO libspdk_accel.so.16.0 00:02:28.541 CC lib/virtio/virtio_pci.o 00:02:28.541 CC lib/blob/blob_bs_dev.o 00:02:28.802 SYMLINK libspdk_accel.so 00:02:28.802 LIB libspdk_init.a 00:02:28.802 SO libspdk_init.so.6.0 00:02:28.802 CC lib/fsdev/fsdev.o 00:02:28.802 CC lib/fsdev/fsdev_io.o 00:02:28.802 SYMLINK libspdk_init.so 00:02:28.802 CC lib/fsdev/fsdev_rpc.o 00:02:28.802 CC lib/bdev/bdev.o 00:02:28.802 CC lib/bdev/bdev_rpc.o 00:02:28.802 CC lib/bdev/bdev_zone.o 00:02:28.802 LIB libspdk_virtio.a 00:02:29.061 CC lib/bdev/part.o 00:02:29.061 CC lib/event/app.o 00:02:29.061 SO libspdk_virtio.so.7.0 00:02:29.061 CC lib/bdev/scsi_nvme.o 00:02:29.061 SYMLINK libspdk_virtio.so 00:02:29.061 CC lib/event/reactor.o 00:02:29.061 LIB libspdk_nvme.a 00:02:29.061 CC lib/event/log_rpc.o 00:02:29.061 CC lib/event/app_rpc.o 00:02:29.061 CC lib/event/scheduler_static.o 00:02:29.319 SO libspdk_nvme.so.15.0 00:02:29.579 LIB libspdk_fsdev.a 00:02:29.579 LIB libspdk_event.a 00:02:29.579 SO libspdk_fsdev.so.2.0 00:02:29.579 SO libspdk_event.so.14.0 00:02:29.579 SYMLINK libspdk_nvme.so 00:02:29.579 SYMLINK libspdk_fsdev.so 00:02:29.579 SYMLINK libspdk_event.so 00:02:29.839 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:30.409 LIB libspdk_fuse_dispatcher.a 00:02:30.410 SO libspdk_fuse_dispatcher.so.1.0 00:02:30.410 SYMLINK libspdk_fuse_dispatcher.so 00:02:31.793 LIB libspdk_blob.a 00:02:31.793 SO libspdk_blob.so.12.0 00:02:31.793 SYMLINK libspdk_blob.so 00:02:31.793 LIB libspdk_bdev.a 00:02:31.793 SO libspdk_bdev.so.17.0 00:02:32.054 SYMLINK libspdk_bdev.so 00:02:32.054 CC lib/lvol/lvol.o 00:02:32.054 CC lib/blobfs/blobfs.o 00:02:32.054 CC lib/blobfs/tree.o 00:02:32.054 CC lib/nvmf/ctrlr.o 00:02:32.054 CC lib/ublk/ublk.o 00:02:32.054 CC lib/nvmf/ctrlr_discovery.o 00:02:32.054 CC lib/ublk/ublk_rpc.o 00:02:32.054 CC lib/ftl/ftl_core.o 00:02:32.054 CC lib/scsi/dev.o 00:02:32.054 CC lib/nbd/nbd.o 00:02:32.054 CC lib/nbd/nbd_rpc.o 00:02:32.314 CC lib/nvmf/ctrlr_bdev.o 00:02:32.314 CC lib/nvmf/subsystem.o 00:02:32.314 CC lib/scsi/lun.o 00:02:32.574 CC lib/ftl/ftl_init.o 00:02:32.574 LIB libspdk_nbd.a 00:02:32.574 SO libspdk_nbd.so.7.0 00:02:32.574 CC lib/nvmf/nvmf.o 00:02:32.574 CC lib/scsi/port.o 00:02:32.836 SYMLINK libspdk_nbd.so 00:02:32.836 CC lib/scsi/scsi.o 00:02:32.836 CC lib/ftl/ftl_layout.o 00:02:32.836 LIB libspdk_ublk.a 00:02:32.836 CC lib/scsi/scsi_bdev.o 00:02:32.836 SO libspdk_ublk.so.3.0 
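Throughout this stage each SPDK component shows up as a LIB/SO/SYMLINK triple: a static archive, a versioned shared object, and an unversioned symlink to link against. As a hedged illustration (paths assume SPDK's default build/lib output directory), the log library built at the start of this stage should end up on disk as:

    ls -l build/lib/libspdk_log.*
    # expected, matching the SO/SYMLINK lines above:
    #   libspdk_log.a
    #   libspdk_log.so -> libspdk_log.so.7.1
    #   libspdk_log.so.7.1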
00:02:32.836 CC lib/scsi/scsi_pr.o 00:02:32.836 SYMLINK libspdk_ublk.so 00:02:32.836 CC lib/ftl/ftl_debug.o 00:02:32.836 LIB libspdk_blobfs.a 00:02:32.836 SO libspdk_blobfs.so.11.0 00:02:33.097 LIB libspdk_lvol.a 00:02:33.097 SYMLINK libspdk_blobfs.so 00:02:33.097 SO libspdk_lvol.so.11.0 00:02:33.098 CC lib/scsi/scsi_rpc.o 00:02:33.098 CC lib/nvmf/nvmf_rpc.o 00:02:33.098 CC lib/scsi/task.o 00:02:33.098 SYMLINK libspdk_lvol.so 00:02:33.098 CC lib/ftl/ftl_io.o 00:02:33.098 CC lib/ftl/ftl_sb.o 00:02:33.098 CC lib/ftl/ftl_l2p.o 00:02:33.098 CC lib/ftl/ftl_l2p_flat.o 00:02:33.359 CC lib/ftl/ftl_nv_cache.o 00:02:33.359 CC lib/ftl/ftl_band.o 00:02:33.359 LIB libspdk_scsi.a 00:02:33.359 CC lib/ftl/ftl_band_ops.o 00:02:33.359 CC lib/ftl/ftl_writer.o 00:02:33.359 CC lib/ftl/ftl_rq.o 00:02:33.359 SO libspdk_scsi.so.9.0 00:02:33.620 SYMLINK libspdk_scsi.so 00:02:33.620 CC lib/nvmf/transport.o 00:02:33.620 CC lib/ftl/ftl_reloc.o 00:02:33.620 CC lib/nvmf/tcp.o 00:02:33.620 CC lib/nvmf/stubs.o 00:02:33.620 CC lib/ftl/ftl_l2p_cache.o 00:02:33.881 CC lib/iscsi/conn.o 00:02:33.881 CC lib/vhost/vhost.o 00:02:33.881 CC lib/vhost/vhost_rpc.o 00:02:33.881 CC lib/vhost/vhost_scsi.o 00:02:34.142 CC lib/vhost/vhost_blk.o 00:02:34.142 CC lib/vhost/rte_vhost_user.o 00:02:34.404 CC lib/iscsi/init_grp.o 00:02:34.404 CC lib/ftl/ftl_p2l.o 00:02:34.404 CC lib/ftl/ftl_p2l_log.o 00:02:34.666 CC lib/iscsi/iscsi.o 00:02:34.666 CC lib/iscsi/param.o 00:02:34.666 CC lib/iscsi/portal_grp.o 00:02:34.666 CC lib/ftl/mngt/ftl_mngt.o 00:02:34.666 CC lib/nvmf/mdns_server.o 00:02:34.927 CC lib/nvmf/rdma.o 00:02:34.927 CC lib/nvmf/auth.o 00:02:34.927 CC lib/iscsi/tgt_node.o 00:02:34.927 CC lib/iscsi/iscsi_subsystem.o 00:02:35.188 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:35.188 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:35.188 CC lib/iscsi/iscsi_rpc.o 00:02:35.188 LIB libspdk_vhost.a 00:02:35.188 SO libspdk_vhost.so.8.0 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:35.449 CC lib/iscsi/task.o 00:02:35.449 SYMLINK libspdk_vhost.so 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:35.449 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:35.707 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:35.707 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:35.707 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:35.707 CC lib/ftl/utils/ftl_conf.o 00:02:35.707 CC lib/ftl/utils/ftl_md.o 00:02:35.707 CC lib/ftl/utils/ftl_mempool.o 00:02:35.983 CC lib/ftl/utils/ftl_bitmap.o 00:02:35.983 CC lib/ftl/utils/ftl_property.o 00:02:35.983 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:35.984 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:35.984 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:35.984 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:35.984 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:35.984 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:35.984 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:36.254 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.254 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.254 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.254 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.254 LIB libspdk_iscsi.a 00:02:36.254 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:36.254 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:36.254 SO libspdk_iscsi.so.8.0 00:02:36.254 CC lib/ftl/base/ftl_base_dev.o 00:02:36.254 CC lib/ftl/base/ftl_base_bdev.o 00:02:36.254 CC lib/ftl/ftl_trace.o 00:02:36.514 SYMLINK libspdk_iscsi.so 00:02:36.514 LIB libspdk_ftl.a 
00:02:36.774 SO libspdk_ftl.so.9.0 00:02:37.035 SYMLINK libspdk_ftl.so 00:02:37.296 LIB libspdk_nvmf.a 00:02:37.557 SO libspdk_nvmf.so.20.0 00:02:37.818 SYMLINK libspdk_nvmf.so 00:02:38.077 CC module/env_dpdk/env_dpdk_rpc.o 00:02:38.339 CC module/blob/bdev/blob_bdev.o 00:02:38.339 CC module/scheduler/gscheduler/gscheduler.o 00:02:38.339 CC module/fsdev/aio/fsdev_aio.o 00:02:38.339 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:38.339 CC module/keyring/linux/keyring.o 00:02:38.339 CC module/keyring/file/keyring.o 00:02:38.339 CC module/sock/posix/posix.o 00:02:38.339 CC module/accel/error/accel_error.o 00:02:38.339 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:38.339 LIB libspdk_env_dpdk_rpc.a 00:02:38.339 SO libspdk_env_dpdk_rpc.so.6.0 00:02:38.339 SYMLINK libspdk_env_dpdk_rpc.so 00:02:38.339 CC module/accel/error/accel_error_rpc.o 00:02:38.339 LIB libspdk_scheduler_gscheduler.a 00:02:38.339 CC module/keyring/file/keyring_rpc.o 00:02:38.339 CC module/keyring/linux/keyring_rpc.o 00:02:38.339 LIB libspdk_scheduler_dpdk_governor.a 00:02:38.339 SO libspdk_scheduler_gscheduler.so.4.0 00:02:38.339 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:38.339 LIB libspdk_scheduler_dynamic.a 00:02:38.339 SO libspdk_scheduler_dynamic.so.4.0 00:02:38.339 SYMLINK libspdk_scheduler_gscheduler.so 00:02:38.339 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:38.339 CC module/fsdev/aio/linux_aio_mgr.o 00:02:38.601 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:38.601 LIB libspdk_blob_bdev.a 00:02:38.601 LIB libspdk_keyring_linux.a 00:02:38.601 LIB libspdk_keyring_file.a 00:02:38.601 LIB libspdk_accel_error.a 00:02:38.601 SYMLINK libspdk_scheduler_dynamic.so 00:02:38.601 SO libspdk_blob_bdev.so.12.0 00:02:38.601 SO libspdk_keyring_linux.so.1.0 00:02:38.601 SO libspdk_keyring_file.so.2.0 00:02:38.601 SO libspdk_accel_error.so.2.0 00:02:38.601 SYMLINK libspdk_blob_bdev.so 00:02:38.601 SYMLINK libspdk_keyring_linux.so 00:02:38.601 SYMLINK libspdk_keyring_file.so 00:02:38.601 SYMLINK libspdk_accel_error.so 00:02:38.601 CC module/accel/ioat/accel_ioat.o 00:02:38.601 CC module/accel/ioat/accel_ioat_rpc.o 00:02:38.601 CC module/accel/dsa/accel_dsa.o 00:02:38.862 CC module/accel/dsa/accel_dsa_rpc.o 00:02:38.862 CC module/accel/iaa/accel_iaa.o 00:02:38.862 LIB libspdk_accel_ioat.a 00:02:38.862 CC module/bdev/delay/vbdev_delay.o 00:02:38.862 CC module/bdev/error/vbdev_error.o 00:02:38.862 CC module/bdev/gpt/gpt.o 00:02:38.862 CC module/blobfs/bdev/blobfs_bdev.o 00:02:38.862 SO libspdk_accel_ioat.so.6.0 00:02:38.862 CC module/bdev/gpt/vbdev_gpt.o 00:02:38.862 SYMLINK libspdk_accel_ioat.so 00:02:38.862 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:38.862 LIB libspdk_fsdev_aio.a 00:02:39.122 LIB libspdk_accel_dsa.a 00:02:39.122 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.122 SO libspdk_fsdev_aio.so.1.0 00:02:39.122 SO libspdk_accel_dsa.so.5.0 00:02:39.122 LIB libspdk_sock_posix.a 00:02:39.122 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:39.122 SYMLINK libspdk_fsdev_aio.so 00:02:39.122 SO libspdk_sock_posix.so.6.0 00:02:39.122 SYMLINK libspdk_accel_dsa.so 00:02:39.122 CC module/bdev/error/vbdev_error_rpc.o 00:02:39.122 LIB libspdk_accel_iaa.a 00:02:39.122 SYMLINK libspdk_sock_posix.so 00:02:39.122 SO libspdk_accel_iaa.so.3.0 00:02:39.122 LIB libspdk_bdev_gpt.a 00:02:39.384 LIB libspdk_bdev_delay.a 00:02:39.384 SYMLINK libspdk_accel_iaa.so 00:02:39.384 SO libspdk_bdev_gpt.so.6.0 00:02:39.384 LIB libspdk_blobfs_bdev.a 00:02:39.384 SO libspdk_bdev_delay.so.6.0 00:02:39.384 CC module/bdev/lvol/vbdev_lvol.o 
00:02:39.384 LIB libspdk_bdev_error.a 00:02:39.384 CC module/bdev/malloc/bdev_malloc.o 00:02:39.384 SO libspdk_blobfs_bdev.so.6.0 00:02:39.384 CC module/bdev/null/bdev_null.o 00:02:39.384 CC module/bdev/nvme/bdev_nvme.o 00:02:39.384 SO libspdk_bdev_error.so.6.0 00:02:39.384 SYMLINK libspdk_bdev_gpt.so 00:02:39.384 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:39.384 CC module/bdev/passthru/vbdev_passthru.o 00:02:39.384 SYMLINK libspdk_bdev_delay.so 00:02:39.384 CC module/bdev/nvme/nvme_rpc.o 00:02:39.384 SYMLINK libspdk_blobfs_bdev.so 00:02:39.384 CC module/bdev/nvme/bdev_mdns_client.o 00:02:39.384 SYMLINK libspdk_bdev_error.so 00:02:39.384 CC module/bdev/nvme/vbdev_opal.o 00:02:39.384 CC module/bdev/raid/bdev_raid.o 00:02:39.705 CC module/bdev/raid/bdev_raid_rpc.o 00:02:39.705 CC module/bdev/null/bdev_null_rpc.o 00:02:39.705 CC module/bdev/raid/bdev_raid_sb.o 00:02:39.705 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:39.705 CC module/bdev/raid/raid0.o 00:02:39.705 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:39.705 LIB libspdk_bdev_null.a 00:02:39.705 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:39.705 SO libspdk_bdev_null.so.6.0 00:02:39.966 LIB libspdk_bdev_passthru.a 00:02:39.966 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:39.966 SO libspdk_bdev_passthru.so.6.0 00:02:39.966 SYMLINK libspdk_bdev_null.so 00:02:39.966 LIB libspdk_bdev_malloc.a 00:02:39.966 SO libspdk_bdev_malloc.so.6.0 00:02:39.966 CC module/bdev/raid/raid1.o 00:02:39.966 SYMLINK libspdk_bdev_passthru.so 00:02:39.966 CC module/bdev/raid/concat.o 00:02:39.966 SYMLINK libspdk_bdev_malloc.so 00:02:39.966 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:39.966 CC module/bdev/split/vbdev_split.o 00:02:40.228 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.228 CC module/bdev/xnvme/bdev_xnvme.o 00:02:40.228 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:40.228 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.228 CC module/bdev/aio/bdev_aio.o 00:02:40.228 LIB libspdk_bdev_lvol.a 00:02:40.228 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.228 SO libspdk_bdev_lvol.so.6.0 00:02:40.489 SYMLINK libspdk_bdev_lvol.so 00:02:40.489 CC module/bdev/ftl/bdev_ftl.o 00:02:40.489 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.489 LIB libspdk_bdev_split.a 00:02:40.489 LIB libspdk_bdev_xnvme.a 00:02:40.489 LIB libspdk_bdev_zone_block.a 00:02:40.489 SO libspdk_bdev_split.so.6.0 00:02:40.489 SO libspdk_bdev_xnvme.so.3.0 00:02:40.489 SO libspdk_bdev_zone_block.so.6.0 00:02:40.489 SYMLINK libspdk_bdev_split.so 00:02:40.489 SYMLINK libspdk_bdev_xnvme.so 00:02:40.489 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.489 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.489 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.750 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.750 SYMLINK libspdk_bdev_zone_block.so 00:02:40.750 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.750 LIB libspdk_bdev_aio.a 00:02:40.750 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.750 LIB libspdk_bdev_raid.a 00:02:40.750 SO libspdk_bdev_aio.so.6.0 00:02:40.750 SO libspdk_bdev_raid.so.6.0 00:02:40.750 SYMLINK libspdk_bdev_aio.so 00:02:40.750 SYMLINK libspdk_bdev_raid.so 00:02:40.750 LIB libspdk_bdev_ftl.a 00:02:40.750 SO libspdk_bdev_ftl.so.6.0 00:02:41.010 SYMLINK libspdk_bdev_ftl.so 00:02:41.010 LIB libspdk_bdev_iscsi.a 00:02:41.010 SO libspdk_bdev_iscsi.so.6.0 00:02:41.010 SYMLINK libspdk_bdev_iscsi.so 00:02:41.391 LIB libspdk_bdev_virtio.a 00:02:41.391 SO libspdk_bdev_virtio.so.6.0 00:02:41.391 SYMLINK libspdk_bdev_virtio.so 00:02:42.334 LIB libspdk_bdev_nvme.a 00:02:42.334 SO 
libspdk_bdev_nvme.so.7.1 00:02:42.603 SYMLINK libspdk_bdev_nvme.so 00:02:42.862 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:42.862 CC module/event/subsystems/vmd/vmd.o 00:02:42.862 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:42.862 CC module/event/subsystems/fsdev/fsdev.o 00:02:42.862 CC module/event/subsystems/sock/sock.o 00:02:42.862 CC module/event/subsystems/iobuf/iobuf.o 00:02:42.862 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:42.862 CC module/event/subsystems/keyring/keyring.o 00:02:42.862 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.122 LIB libspdk_event_fsdev.a 00:02:43.122 LIB libspdk_event_keyring.a 00:02:43.122 LIB libspdk_event_sock.a 00:02:43.122 LIB libspdk_event_vhost_blk.a 00:02:43.122 LIB libspdk_event_vmd.a 00:02:43.122 LIB libspdk_event_scheduler.a 00:02:43.122 LIB libspdk_event_iobuf.a 00:02:43.122 SO libspdk_event_vhost_blk.so.3.0 00:02:43.122 SO libspdk_event_keyring.so.1.0 00:02:43.122 SO libspdk_event_sock.so.5.0 00:02:43.122 SO libspdk_event_fsdev.so.1.0 00:02:43.122 SO libspdk_event_scheduler.so.4.0 00:02:43.122 SO libspdk_event_vmd.so.6.0 00:02:43.122 SO libspdk_event_iobuf.so.3.0 00:02:43.122 SYMLINK libspdk_event_sock.so 00:02:43.122 SYMLINK libspdk_event_vhost_blk.so 00:02:43.122 SYMLINK libspdk_event_fsdev.so 00:02:43.122 SYMLINK libspdk_event_keyring.so 00:02:43.122 SYMLINK libspdk_event_scheduler.so 00:02:43.122 SYMLINK libspdk_event_vmd.so 00:02:43.122 SYMLINK libspdk_event_iobuf.so 00:02:43.381 CC module/event/subsystems/accel/accel.o 00:02:43.381 LIB libspdk_event_accel.a 00:02:43.641 SO libspdk_event_accel.so.6.0 00:02:43.641 SYMLINK libspdk_event_accel.so 00:02:43.900 CC module/event/subsystems/bdev/bdev.o 00:02:43.900 LIB libspdk_event_bdev.a 00:02:44.179 SO libspdk_event_bdev.so.6.0 00:02:44.179 SYMLINK libspdk_event_bdev.so 00:02:44.451 CC module/event/subsystems/nbd/nbd.o 00:02:44.451 CC module/event/subsystems/ublk/ublk.o 00:02:44.451 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:44.451 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:44.451 CC module/event/subsystems/scsi/scsi.o 00:02:44.451 LIB libspdk_event_nbd.a 00:02:44.451 LIB libspdk_event_ublk.a 00:02:44.451 LIB libspdk_event_scsi.a 00:02:44.451 SO libspdk_event_nbd.so.6.0 00:02:44.451 SO libspdk_event_ublk.so.3.0 00:02:44.451 SO libspdk_event_scsi.so.6.0 00:02:44.451 SYMLINK libspdk_event_nbd.so 00:02:44.451 SYMLINK libspdk_event_ublk.so 00:02:44.451 SYMLINK libspdk_event_scsi.so 00:02:44.451 LIB libspdk_event_nvmf.a 00:02:44.713 SO libspdk_event_nvmf.so.6.0 00:02:44.713 SYMLINK libspdk_event_nvmf.so 00:02:44.713 CC module/event/subsystems/iscsi/iscsi.o 00:02:44.713 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:44.975 LIB libspdk_event_vhost_scsi.a 00:02:44.975 LIB libspdk_event_iscsi.a 00:02:44.975 SO libspdk_event_vhost_scsi.so.3.0 00:02:44.975 SO libspdk_event_iscsi.so.6.0 00:02:44.975 SYMLINK libspdk_event_vhost_scsi.so 00:02:44.975 SYMLINK libspdk_event_iscsi.so 00:02:45.236 SO libspdk.so.6.0 00:02:45.236 SYMLINK libspdk.so 00:02:45.236 CC app/spdk_nvme_perf/perf.o 00:02:45.236 CC app/spdk_lspci/spdk_lspci.o 00:02:45.236 CC app/trace_record/trace_record.o 00:02:45.495 CXX app/trace/trace.o 00:02:45.495 CC app/nvmf_tgt/nvmf_main.o 00:02:45.495 CC app/iscsi_tgt/iscsi_tgt.o 00:02:45.495 CC app/spdk_tgt/spdk_tgt.o 00:02:45.496 CC examples/util/zipf/zipf.o 00:02:45.496 CC examples/ioat/perf/perf.o 00:02:45.496 CC test/thread/poller_perf/poller_perf.o 00:02:45.496 LINK spdk_lspci 00:02:45.496 LINK nvmf_tgt 00:02:45.496 LINK iscsi_tgt 
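With the component libraries in place, the build moves on to the SPDK applications and example tools: app/spdk_tgt, app/nvmf_tgt, app/iscsi_tgt, app/spdk_lspci, app/trace_record and friends are compiled and, in the lines that follow, linked, ahead of the public-header C++ compile pass (the TEST_HEADER and CXX test/cpp_headers lines below). Under the default output layout these binaries would normally land in build/bin and build/examples; a hedged smoke check, not part of this job's own scripts, might look like:

    ls build/bin/spdk_tgt build/bin/nvmf_tgt build/bin/iscsi_tgt
    ./build/bin/spdk_tgt -h   # prints the target application's usage text and exits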
00:02:45.496 LINK poller_perf 00:02:45.756 LINK zipf 00:02:45.756 LINK spdk_tgt 00:02:45.756 LINK spdk_trace_record 00:02:45.756 LINK ioat_perf 00:02:45.756 CC app/spdk_nvme_identify/identify.o 00:02:45.756 LINK spdk_trace 00:02:45.756 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.019 CC examples/ioat/verify/verify.o 00:02:46.019 TEST_HEADER include/spdk/accel.h 00:02:46.019 TEST_HEADER include/spdk/accel_module.h 00:02:46.019 TEST_HEADER include/spdk/assert.h 00:02:46.019 TEST_HEADER include/spdk/barrier.h 00:02:46.019 TEST_HEADER include/spdk/base64.h 00:02:46.019 TEST_HEADER include/spdk/bdev.h 00:02:46.019 TEST_HEADER include/spdk/bdev_module.h 00:02:46.019 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.019 TEST_HEADER include/spdk/bit_array.h 00:02:46.019 TEST_HEADER include/spdk/bit_pool.h 00:02:46.019 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.019 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.019 TEST_HEADER include/spdk/blobfs.h 00:02:46.019 TEST_HEADER include/spdk/blob.h 00:02:46.019 TEST_HEADER include/spdk/conf.h 00:02:46.019 TEST_HEADER include/spdk/config.h 00:02:46.019 TEST_HEADER include/spdk/cpuset.h 00:02:46.019 TEST_HEADER include/spdk/crc16.h 00:02:46.019 TEST_HEADER include/spdk/crc32.h 00:02:46.019 TEST_HEADER include/spdk/crc64.h 00:02:46.019 TEST_HEADER include/spdk/dif.h 00:02:46.019 TEST_HEADER include/spdk/dma.h 00:02:46.019 TEST_HEADER include/spdk/endian.h 00:02:46.019 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.019 TEST_HEADER include/spdk/env.h 00:02:46.019 TEST_HEADER include/spdk/event.h 00:02:46.019 CC app/spdk_top/spdk_top.o 00:02:46.019 TEST_HEADER include/spdk/fd_group.h 00:02:46.019 TEST_HEADER include/spdk/fd.h 00:02:46.019 TEST_HEADER include/spdk/file.h 00:02:46.019 TEST_HEADER include/spdk/fsdev.h 00:02:46.019 TEST_HEADER include/spdk/fsdev_module.h 00:02:46.019 TEST_HEADER include/spdk/ftl.h 00:02:46.019 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:46.019 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.019 TEST_HEADER include/spdk/hexlify.h 00:02:46.019 TEST_HEADER include/spdk/histogram_data.h 00:02:46.019 TEST_HEADER include/spdk/idxd.h 00:02:46.019 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.019 TEST_HEADER include/spdk/init.h 00:02:46.019 TEST_HEADER include/spdk/ioat.h 00:02:46.019 CC test/dma/test_dma/test_dma.o 00:02:46.019 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.019 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.019 TEST_HEADER include/spdk/json.h 00:02:46.019 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.019 TEST_HEADER include/spdk/keyring.h 00:02:46.019 TEST_HEADER include/spdk/keyring_module.h 00:02:46.019 TEST_HEADER include/spdk/likely.h 00:02:46.019 CC test/app/bdev_svc/bdev_svc.o 00:02:46.019 TEST_HEADER include/spdk/log.h 00:02:46.019 TEST_HEADER include/spdk/lvol.h 00:02:46.019 TEST_HEADER include/spdk/md5.h 00:02:46.019 TEST_HEADER include/spdk/memory.h 00:02:46.019 TEST_HEADER include/spdk/mmio.h 00:02:46.019 TEST_HEADER include/spdk/nbd.h 00:02:46.019 TEST_HEADER include/spdk/net.h 00:02:46.019 TEST_HEADER include/spdk/notify.h 00:02:46.019 TEST_HEADER include/spdk/nvme.h 00:02:46.019 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.019 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.019 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:46.019 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.019 TEST_HEADER include/spdk/nvme_zns.h 00:02:46.019 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:46.019 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:46.019 TEST_HEADER include/spdk/nvmf.h 00:02:46.019 TEST_HEADER 
include/spdk/nvmf_spec.h 00:02:46.019 TEST_HEADER include/spdk/nvmf_transport.h 00:02:46.019 TEST_HEADER include/spdk/opal.h 00:02:46.019 TEST_HEADER include/spdk/opal_spec.h 00:02:46.019 TEST_HEADER include/spdk/pci_ids.h 00:02:46.019 TEST_HEADER include/spdk/pipe.h 00:02:46.019 TEST_HEADER include/spdk/queue.h 00:02:46.019 TEST_HEADER include/spdk/reduce.h 00:02:46.019 TEST_HEADER include/spdk/rpc.h 00:02:46.019 TEST_HEADER include/spdk/scheduler.h 00:02:46.019 TEST_HEADER include/spdk/scsi.h 00:02:46.019 TEST_HEADER include/spdk/scsi_spec.h 00:02:46.019 TEST_HEADER include/spdk/sock.h 00:02:46.019 TEST_HEADER include/spdk/stdinc.h 00:02:46.019 TEST_HEADER include/spdk/string.h 00:02:46.019 TEST_HEADER include/spdk/thread.h 00:02:46.019 TEST_HEADER include/spdk/trace.h 00:02:46.019 TEST_HEADER include/spdk/trace_parser.h 00:02:46.019 TEST_HEADER include/spdk/tree.h 00:02:46.019 TEST_HEADER include/spdk/ublk.h 00:02:46.019 TEST_HEADER include/spdk/util.h 00:02:46.019 TEST_HEADER include/spdk/uuid.h 00:02:46.019 TEST_HEADER include/spdk/version.h 00:02:46.019 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:46.019 LINK spdk_nvme_discover 00:02:46.019 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:46.019 TEST_HEADER include/spdk/vhost.h 00:02:46.019 TEST_HEADER include/spdk/vmd.h 00:02:46.019 TEST_HEADER include/spdk/xor.h 00:02:46.019 TEST_HEADER include/spdk/zipf.h 00:02:46.019 CXX test/cpp_headers/accel.o 00:02:46.019 CC test/env/mem_callbacks/mem_callbacks.o 00:02:46.019 LINK verify 00:02:46.019 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:46.279 LINK bdev_svc 00:02:46.279 CXX test/cpp_headers/accel_module.o 00:02:46.279 LINK spdk_nvme_perf 00:02:46.279 CC test/app/histogram_perf/histogram_perf.o 00:02:46.279 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:46.538 CXX test/cpp_headers/assert.o 00:02:46.538 CC test/env/vtophys/vtophys.o 00:02:46.538 LINK histogram_perf 00:02:46.538 LINK test_dma 00:02:46.538 CXX test/cpp_headers/barrier.o 00:02:46.538 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.538 LINK interrupt_tgt 00:02:46.538 LINK nvme_fuzz 00:02:46.797 LINK vtophys 00:02:46.797 LINK mem_callbacks 00:02:46.797 LINK spdk_nvme_identify 00:02:46.797 CXX test/cpp_headers/base64.o 00:02:46.797 LINK env_dpdk_post_init 00:02:46.797 CC app/spdk_dd/spdk_dd.o 00:02:46.797 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.057 CXX test/cpp_headers/bdev.o 00:02:47.057 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.057 CC examples/sock/hello_world/hello_sock.o 00:02:47.057 CC examples/vmd/lsvmd/lsvmd.o 00:02:47.057 CC examples/thread/thread/thread_ex.o 00:02:47.057 CC app/fio/nvme/fio_plugin.o 00:02:47.057 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.057 CC test/env/memory/memory_ut.o 00:02:47.057 LINK spdk_top 00:02:47.057 CXX test/cpp_headers/bdev_module.o 00:02:47.057 LINK lsvmd 00:02:47.317 LINK hello_sock 00:02:47.317 LINK spdk_dd 00:02:47.317 LINK thread 00:02:47.317 CXX test/cpp_headers/bdev_zone.o 00:02:47.317 CC examples/vmd/led/led.o 00:02:47.580 CXX test/cpp_headers/bit_array.o 00:02:47.580 CC test/app/jsoncat/jsoncat.o 00:02:47.580 LINK led 00:02:47.580 CC examples/idxd/perf/perf.o 00:02:47.580 LINK vhost_fuzz 00:02:47.841 CC test/event/event_perf/event_perf.o 00:02:47.841 LINK jsoncat 00:02:47.841 CXX test/cpp_headers/bit_pool.o 00:02:47.841 CC examples/nvme/hello_world/hello_world.o 00:02:47.841 LINK spdk_nvme 00:02:47.841 CC test/event/reactor/reactor.o 00:02:47.841 CC test/event/reactor_perf/reactor_perf.o 00:02:47.841 LINK event_perf 00:02:47.841 CXX 
test/cpp_headers/blob_bdev.o 00:02:47.841 CC test/event/app_repeat/app_repeat.o 00:02:48.103 LINK reactor 00:02:48.103 LINK reactor_perf 00:02:48.103 LINK hello_world 00:02:48.103 LINK idxd_perf 00:02:48.103 CC app/fio/bdev/fio_plugin.o 00:02:48.103 CXX test/cpp_headers/blobfs_bdev.o 00:02:48.103 LINK app_repeat 00:02:48.363 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:48.363 CXX test/cpp_headers/blobfs.o 00:02:48.363 CC examples/nvme/reconnect/reconnect.o 00:02:48.363 CC app/vhost/vhost.o 00:02:48.363 LINK memory_ut 00:02:48.363 CC examples/accel/perf/accel_perf.o 00:02:48.363 CC test/nvme/aer/aer.o 00:02:48.363 CXX test/cpp_headers/blob.o 00:02:48.363 CC test/event/scheduler/scheduler.o 00:02:48.363 LINK vhost 00:02:48.622 LINK hello_fsdev 00:02:48.622 LINK spdk_bdev 00:02:48.622 CXX test/cpp_headers/conf.o 00:02:48.622 CC test/env/pci/pci_ut.o 00:02:48.622 LINK reconnect 00:02:48.622 LINK scheduler 00:02:48.622 LINK aer 00:02:48.622 CXX test/cpp_headers/config.o 00:02:48.880 CXX test/cpp_headers/cpuset.o 00:02:48.880 CC test/nvme/reset/reset.o 00:02:48.880 CC test/nvme/sgl/sgl.o 00:02:48.880 CC test/nvme/e2edp/nvme_dp.o 00:02:48.880 CXX test/cpp_headers/crc16.o 00:02:48.880 LINK iscsi_fuzz 00:02:48.880 CXX test/cpp_headers/crc32.o 00:02:48.880 LINK accel_perf 00:02:48.880 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.141 CC test/app/stub/stub.o 00:02:49.141 CXX test/cpp_headers/crc64.o 00:02:49.141 CXX test/cpp_headers/dif.o 00:02:49.141 LINK reset 00:02:49.141 LINK sgl 00:02:49.141 CXX test/cpp_headers/dma.o 00:02:49.141 LINK pci_ut 00:02:49.141 CXX test/cpp_headers/endian.o 00:02:49.141 LINK nvme_dp 00:02:49.141 LINK stub 00:02:49.402 CXX test/cpp_headers/env_dpdk.o 00:02:49.402 CC test/nvme/overhead/overhead.o 00:02:49.402 CC test/nvme/err_injection/err_injection.o 00:02:49.402 CC test/nvme/startup/startup.o 00:02:49.402 CC test/nvme/reserve/reserve.o 00:02:49.402 CC test/rpc_client/rpc_client_test.o 00:02:49.402 CXX test/cpp_headers/env.o 00:02:49.402 CC examples/blob/hello_world/hello_blob.o 00:02:49.402 LINK nvme_manage 00:02:49.402 LINK err_injection 00:02:49.662 LINK startup 00:02:49.662 LINK reserve 00:02:49.662 CC test/accel/dif/dif.o 00:02:49.662 LINK overhead 00:02:49.662 CXX test/cpp_headers/event.o 00:02:49.662 CC examples/bdev/hello_world/hello_bdev.o 00:02:49.662 LINK rpc_client_test 00:02:49.662 LINK hello_blob 00:02:49.923 CXX test/cpp_headers/fd_group.o 00:02:49.923 CXX test/cpp_headers/fd.o 00:02:49.923 CC test/nvme/simple_copy/simple_copy.o 00:02:49.923 CC examples/nvme/arbitration/arbitration.o 00:02:49.923 CC test/nvme/connect_stress/connect_stress.o 00:02:49.923 CC examples/nvme/hotplug/hotplug.o 00:02:49.923 LINK hello_bdev 00:02:49.923 CC examples/blob/cli/blobcli.o 00:02:49.923 CXX test/cpp_headers/file.o 00:02:49.923 LINK connect_stress 00:02:49.923 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:50.184 LINK simple_copy 00:02:50.184 CXX test/cpp_headers/fsdev.o 00:02:50.184 LINK hotplug 00:02:50.184 CC test/blobfs/mkfs/mkfs.o 00:02:50.184 LINK arbitration 00:02:50.184 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.184 LINK cmb_copy 00:02:50.184 CXX test/cpp_headers/fsdev_module.o 00:02:50.184 CC test/nvme/boot_partition/boot_partition.o 00:02:50.444 CC test/nvme/compliance/nvme_compliance.o 00:02:50.444 LINK mkfs 00:02:50.444 LINK dif 00:02:50.444 CC examples/nvme/abort/abort.o 00:02:50.444 CXX test/cpp_headers/ftl.o 00:02:50.444 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:50.444 LINK boot_partition 00:02:50.444 LINK blobcli 00:02:50.708 CXX 
test/cpp_headers/fuse_dispatcher.o 00:02:50.708 CXX test/cpp_headers/gpt_spec.o 00:02:50.708 LINK pmr_persistence 00:02:50.708 CC test/lvol/esnap/esnap.o 00:02:50.708 CC test/nvme/fused_ordering/fused_ordering.o 00:02:50.708 LINK nvme_compliance 00:02:50.708 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:50.708 CXX test/cpp_headers/hexlify.o 00:02:50.708 CC test/nvme/fdp/fdp.o 00:02:50.708 CXX test/cpp_headers/histogram_data.o 00:02:50.968 CXX test/cpp_headers/idxd.o 00:02:50.968 LINK abort 00:02:50.968 LINK fused_ordering 00:02:50.968 CXX test/cpp_headers/idxd_spec.o 00:02:50.968 LINK doorbell_aers 00:02:50.968 CXX test/cpp_headers/init.o 00:02:50.968 CC test/nvme/cuse/cuse.o 00:02:50.968 CXX test/cpp_headers/ioat.o 00:02:50.968 CXX test/cpp_headers/ioat_spec.o 00:02:51.228 CXX test/cpp_headers/iscsi_spec.o 00:02:51.228 CXX test/cpp_headers/json.o 00:02:51.228 CXX test/cpp_headers/jsonrpc.o 00:02:51.228 LINK bdevperf 00:02:51.228 LINK fdp 00:02:51.228 CXX test/cpp_headers/keyring.o 00:02:51.228 CC test/bdev/bdevio/bdevio.o 00:02:51.228 CXX test/cpp_headers/keyring_module.o 00:02:51.228 CXX test/cpp_headers/likely.o 00:02:51.228 CXX test/cpp_headers/log.o 00:02:51.228 CXX test/cpp_headers/lvol.o 00:02:51.228 CXX test/cpp_headers/md5.o 00:02:51.486 CXX test/cpp_headers/memory.o 00:02:51.486 CXX test/cpp_headers/mmio.o 00:02:51.486 CXX test/cpp_headers/nbd.o 00:02:51.486 CXX test/cpp_headers/net.o 00:02:51.486 CXX test/cpp_headers/notify.o 00:02:51.486 CXX test/cpp_headers/nvme.o 00:02:51.486 CXX test/cpp_headers/nvme_intel.o 00:02:51.486 CXX test/cpp_headers/nvme_ocssd.o 00:02:51.486 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:51.486 CXX test/cpp_headers/nvme_spec.o 00:02:51.486 CXX test/cpp_headers/nvme_zns.o 00:02:51.486 CC examples/nvmf/nvmf/nvmf.o 00:02:51.743 CXX test/cpp_headers/nvmf_cmd.o 00:02:51.743 LINK bdevio 00:02:51.743 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:51.743 CXX test/cpp_headers/nvmf.o 00:02:51.743 CXX test/cpp_headers/nvmf_spec.o 00:02:51.743 CXX test/cpp_headers/nvmf_transport.o 00:02:51.743 CXX test/cpp_headers/opal.o 00:02:51.743 CXX test/cpp_headers/opal_spec.o 00:02:51.743 CXX test/cpp_headers/pci_ids.o 00:02:51.743 CXX test/cpp_headers/pipe.o 00:02:52.002 CXX test/cpp_headers/queue.o 00:02:52.002 CXX test/cpp_headers/reduce.o 00:02:52.002 LINK nvmf 00:02:52.002 CXX test/cpp_headers/rpc.o 00:02:52.002 CXX test/cpp_headers/scheduler.o 00:02:52.002 CXX test/cpp_headers/scsi.o 00:02:52.002 CXX test/cpp_headers/scsi_spec.o 00:02:52.002 CXX test/cpp_headers/sock.o 00:02:52.002 CXX test/cpp_headers/stdinc.o 00:02:52.002 CXX test/cpp_headers/string.o 00:02:52.002 CXX test/cpp_headers/thread.o 00:02:52.002 CXX test/cpp_headers/trace.o 00:02:52.002 CXX test/cpp_headers/trace_parser.o 00:02:52.002 CXX test/cpp_headers/tree.o 00:02:52.260 CXX test/cpp_headers/ublk.o 00:02:52.260 CXX test/cpp_headers/util.o 00:02:52.260 CXX test/cpp_headers/uuid.o 00:02:52.260 CXX test/cpp_headers/version.o 00:02:52.260 CXX test/cpp_headers/vfio_user_pci.o 00:02:52.260 CXX test/cpp_headers/vfio_user_spec.o 00:02:52.260 CXX test/cpp_headers/vhost.o 00:02:52.260 CXX test/cpp_headers/vmd.o 00:02:52.260 CXX test/cpp_headers/xor.o 00:02:52.260 CXX test/cpp_headers/zipf.o 00:02:52.260 LINK cuse 00:02:57.610 LINK esnap 00:02:57.610 ************************************ 00:02:57.610 END TEST make 00:02:57.610 ************************************ 00:02:57.610 00:02:57.610 real 1m11.486s 00:02:57.610 user 6m37.312s 00:02:57.610 sys 1m14.372s 00:02:57.610 23:40:29 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:57.610 23:40:29 make -- common/autotest_common.sh@10 -- $ set +x 00:02:57.610 23:40:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:57.610 23:40:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:57.610 23:40:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:57.610 23:40:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.610 23:40:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:57.610 23:40:29 -- pm/common@44 -- $ pid=5064 00:02:57.610 23:40:29 -- pm/common@50 -- $ kill -TERM 5064 00:02:57.610 23:40:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.610 23:40:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:57.610 23:40:29 -- pm/common@44 -- $ pid=5065 00:02:57.610 23:40:29 -- pm/common@50 -- $ kill -TERM 5065 00:02:57.610 23:40:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:57.610 23:40:29 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:57.610 23:40:29 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:57.610 23:40:29 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:57.610 23:40:29 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:57.610 23:40:29 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:57.610 23:40:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:57.610 23:40:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:57.610 23:40:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:57.610 23:40:29 -- scripts/common.sh@336 -- # IFS=.-: 00:02:57.610 23:40:29 -- scripts/common.sh@336 -- # read -ra ver1 00:02:57.610 23:40:29 -- scripts/common.sh@337 -- # IFS=.-: 00:02:57.610 23:40:29 -- scripts/common.sh@337 -- # read -ra ver2 00:02:57.610 23:40:29 -- scripts/common.sh@338 -- # local 'op=<' 00:02:57.610 23:40:29 -- scripts/common.sh@340 -- # ver1_l=2 00:02:57.610 23:40:29 -- scripts/common.sh@341 -- # ver2_l=1 00:02:57.610 23:40:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:57.610 23:40:29 -- scripts/common.sh@344 -- # case "$op" in 00:02:57.610 23:40:29 -- scripts/common.sh@345 -- # : 1 00:02:57.610 23:40:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:57.610 23:40:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.610 23:40:29 -- scripts/common.sh@365 -- # decimal 1 00:02:57.610 23:40:29 -- scripts/common.sh@353 -- # local d=1 00:02:57.610 23:40:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:57.610 23:40:29 -- scripts/common.sh@355 -- # echo 1 00:02:57.610 23:40:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:57.610 23:40:29 -- scripts/common.sh@366 -- # decimal 2 00:02:57.610 23:40:29 -- scripts/common.sh@353 -- # local d=2 00:02:57.610 23:40:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:57.610 23:40:29 -- scripts/common.sh@355 -- # echo 2 00:02:57.610 23:40:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:57.610 23:40:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:57.610 23:40:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:57.610 23:40:29 -- scripts/common.sh@368 -- # return 0 00:02:57.610 23:40:29 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:57.610 23:40:29 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:57.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.610 --rc genhtml_branch_coverage=1 00:02:57.610 --rc genhtml_function_coverage=1 00:02:57.610 --rc genhtml_legend=1 00:02:57.610 --rc geninfo_all_blocks=1 00:02:57.610 --rc geninfo_unexecuted_blocks=1 00:02:57.610 00:02:57.610 ' 00:02:57.610 23:40:29 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:57.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.610 --rc genhtml_branch_coverage=1 00:02:57.610 --rc genhtml_function_coverage=1 00:02:57.610 --rc genhtml_legend=1 00:02:57.610 --rc geninfo_all_blocks=1 00:02:57.610 --rc geninfo_unexecuted_blocks=1 00:02:57.610 00:02:57.610 ' 00:02:57.610 23:40:29 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:57.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.610 --rc genhtml_branch_coverage=1 00:02:57.610 --rc genhtml_function_coverage=1 00:02:57.610 --rc genhtml_legend=1 00:02:57.610 --rc geninfo_all_blocks=1 00:02:57.610 --rc geninfo_unexecuted_blocks=1 00:02:57.610 00:02:57.610 ' 00:02:57.610 23:40:29 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:57.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.610 --rc genhtml_branch_coverage=1 00:02:57.610 --rc genhtml_function_coverage=1 00:02:57.610 --rc genhtml_legend=1 00:02:57.610 --rc geninfo_all_blocks=1 00:02:57.610 --rc geninfo_unexecuted_blocks=1 00:02:57.610 00:02:57.610 ' 00:02:57.610 23:40:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:57.610 23:40:29 -- nvmf/common.sh@7 -- # uname -s 00:02:57.610 23:40:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:57.610 23:40:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:57.610 23:40:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:57.610 23:40:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:57.610 23:40:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:57.610 23:40:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:57.610 23:40:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:57.610 23:40:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:57.610 23:40:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:57.610 23:40:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:57.610 23:40:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b8235fc-8e64-42c3-b925-f7853c7e59dc 00:02:57.610 
23:40:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b8235fc-8e64-42c3-b925-f7853c7e59dc 00:02:57.610 23:40:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:57.610 23:40:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:57.610 23:40:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:57.610 23:40:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:57.610 23:40:29 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:57.610 23:40:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:57.610 23:40:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:57.610 23:40:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:57.610 23:40:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:57.610 23:40:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.610 23:40:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.610 23:40:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.610 23:40:29 -- paths/export.sh@5 -- # export PATH 00:02:57.610 23:40:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.610 23:40:29 -- nvmf/common.sh@51 -- # : 0 00:02:57.610 23:40:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:57.610 23:40:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:57.610 23:40:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:57.610 23:40:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:57.610 23:40:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:57.610 23:40:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:57.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:57.610 23:40:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:57.610 23:40:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:57.610 23:40:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:57.610 23:40:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:57.610 23:40:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:57.610 23:40:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:57.610 23:40:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:57.610 23:40:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:57.610 23:40:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:57.610 23:40:29 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:57.610 23:40:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:57.610 23:40:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:57.610 23:40:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:57.610 23:40:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54279 00:02:57.610 23:40:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:57.610 23:40:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:57.610 23:40:30 -- pm/common@17 -- # local monitor 00:02:57.610 23:40:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.610 23:40:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.610 23:40:30 -- pm/common@25 -- # sleep 1 00:02:57.610 23:40:30 -- pm/common@21 -- # date +%s 00:02:57.610 23:40:30 -- pm/common@21 -- # date +%s 00:02:57.610 23:40:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733442030 00:02:57.610 23:40:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733442030 00:02:57.610 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733442030_collect-vmstat.pm.log 00:02:57.610 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733442030_collect-cpu-load.pm.log 00:02:58.545 23:40:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:58.545 23:40:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:58.545 23:40:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:58.545 23:40:31 -- common/autotest_common.sh@10 -- # set +x 00:02:58.545 23:40:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:58.545 23:40:31 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:58.545 23:40:31 -- common/autotest_common.sh@10 -- # set +x 00:02:58.545 23:40:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:58.545 23:40:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:58.545 23:40:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:58.545 23:40:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:58.545 23:40:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:58.545 23:40:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:58.545 23:40:31 -- common/autotest_common.sh@1457 -- # uname 00:02:58.545 23:40:31 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:58.545 23:40:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:58.545 23:40:31 -- common/autotest_common.sh@1477 -- # uname 00:02:58.545 23:40:31 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:58.545 23:40:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:58.545 23:40:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:58.545 lcov: LCOV version 1.15 00:02:58.545 23:40:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:13.413 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:13.413 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:28.306 23:41:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:28.306 23:41:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:28.306 23:41:00 -- common/autotest_common.sh@10 -- # set +x 00:03:28.306 23:41:00 -- spdk/autotest.sh@78 -- # rm -f 00:03:28.306 23:41:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:28.306 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:28.306 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:28.306 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:28.306 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:28.306 23:41:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:28.306 23:41:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:28.306 23:41:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:28.306 23:41:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:28.306 23:41:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:28.306 23:41:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:28.306 23:41:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:28.306 23:41:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:28.306 23:41:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:28.306 23:41:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:28.306 23:41:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:28.306 23:41:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.306 23:41:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:28.306 23:41:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:28.306 23:41:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:28.306 23:41:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:28.306 23:41:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:28.306 23:41:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:28.306 23:41:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:28.306 23:41:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:28.306 23:41:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:28.306 23:41:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:28.306 23:41:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:28.306 23:41:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:28.306 23:41:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:28.306 23:41:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:28.306 23:41:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:28.307 23:41:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:28.307 23:41:00 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:03:28.307 23:41:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:28.307 23:41:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:28.307 23:41:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:28.307 23:41:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:28.307 23:41:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:28.307 23:41:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:28.307 23:41:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:28.307 23:41:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:28.307 23:41:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:28.307 23:41:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:28.307 23:41:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:28.307 23:41:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:28.307 23:41:00 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:28.307 23:41:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:28.307 23:41:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:28.307 23:41:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:28.307 23:41:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.307 23:41:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.307 23:41:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:28.307 23:41:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:28.307 23:41:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:28.307 No valid GPT data, bailing 00:03:28.307 23:41:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.307 23:41:00 -- scripts/common.sh@394 -- # pt= 00:03:28.307 23:41:00 -- scripts/common.sh@395 -- # return 1 00:03:28.307 23:41:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:28.307 1+0 records in 00:03:28.307 1+0 records out 00:03:28.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422454 s, 248 MB/s 00:03:28.307 23:41:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.307 23:41:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.307 23:41:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:28.307 23:41:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:28.307 23:41:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:28.307 No valid GPT data, bailing 00:03:28.307 23:41:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:28.307 23:41:00 -- scripts/common.sh@394 -- # pt= 00:03:28.307 23:41:00 -- scripts/common.sh@395 -- # return 1 00:03:28.307 23:41:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:28.307 1+0 records in 00:03:28.307 1+0 records out 00:03:28.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00796718 s, 132 MB/s 00:03:28.307 23:41:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.307 23:41:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.307 23:41:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:28.307 23:41:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:28.307 23:41:00 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:28.565 No valid GPT data, bailing 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # pt= 00:03:28.565 23:41:01 -- scripts/common.sh@395 -- # return 1 00:03:28.565 23:41:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:28.565 1+0 records in 00:03:28.565 1+0 records out 00:03:28.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039536 s, 265 MB/s 00:03:28.565 23:41:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.565 23:41:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.565 23:41:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:28.565 23:41:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:28.565 23:41:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:28.565 No valid GPT data, bailing 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # pt= 00:03:28.565 23:41:01 -- scripts/common.sh@395 -- # return 1 00:03:28.565 23:41:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:28.565 1+0 records in 00:03:28.565 1+0 records out 00:03:28.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414696 s, 253 MB/s 00:03:28.565 23:41:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.565 23:41:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.565 23:41:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:28.565 23:41:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:28.565 23:41:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:28.565 No valid GPT data, bailing 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # pt= 00:03:28.565 23:41:01 -- scripts/common.sh@395 -- # return 1 00:03:28.565 23:41:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:28.565 1+0 records in 00:03:28.565 1+0 records out 00:03:28.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.003529 s, 297 MB/s 00:03:28.565 23:41:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.565 23:41:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.565 23:41:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:28.565 23:41:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:28.565 23:41:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:28.565 No valid GPT data, bailing 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:28.565 23:41:01 -- scripts/common.sh@394 -- # pt= 00:03:28.565 23:41:01 -- scripts/common.sh@395 -- # return 1 00:03:28.565 23:41:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:28.565 1+0 records in 00:03:28.565 1+0 records out 00:03:28.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00390928 s, 268 MB/s 00:03:28.565 23:41:01 -- spdk/autotest.sh@105 -- # sync 00:03:28.823 23:41:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:28.823 23:41:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:28.823 23:41:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:30.193 23:41:02 
-- spdk/autotest.sh@111 -- # uname -s 00:03:30.193 23:41:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:30.193 23:41:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:30.193 23:41:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:30.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:31.014 Hugepages 00:03:31.014 node hugesize free / total 00:03:31.014 node0 1048576kB 0 / 0 00:03:31.014 node0 2048kB 0 / 0 00:03:31.014 00:03:31.014 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.014 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:31.014 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:31.014 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:31.014 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:31.271 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:31.271 23:41:03 -- spdk/autotest.sh@117 -- # uname -s 00:03:31.271 23:41:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:31.271 23:41:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:31.271 23:41:03 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:31.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:32.112 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.112 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.112 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.112 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.112 23:41:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:33.484 23:41:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:33.484 23:41:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:33.484 23:41:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:33.484 23:41:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:33.484 23:41:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:33.484 23:41:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:33.484 23:41:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:33.484 23:41:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:33.484 23:41:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:33.484 23:41:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:33.484 23:41:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:33.484 23:41:05 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.742 Waiting for block devices as requested 00:03:33.742 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.742 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.742 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.742 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:39.007 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:39.007 23:41:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:39.007 
23:41:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:39.007 23:41:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:39.007 23:41:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:39.007 23:41:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.007 23:41:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:39.007 23:41:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1543 -- # continue 00:03:39.007 23:41:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:39.007 23:41:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.007 23:41:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
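For reference, the per-controller probe traced above amounts to roughly the following shell sketch. It is illustrative only: the BDF is one example taken from this run, nvme-cli is assumed to be installed, and the controller is assumed to be bound to the kernel nvme driver.

    # Resolve the NVMe character device behind a PCI BDF via sysfs, then check
    # the OACS namespace-management bit and the unallocated capacity, as the
    # autotest helpers above do. Example BDF from this run; adjust as needed.
    bdf=0000:00:10.0
    ctrl_path=$(readlink -f /sys/class/nvme/nvme* | grep "${bdf}/nvme/nvme")
    ctrl=/dev/$(basename "$ctrl_path")
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)            # e.g. ' 0x12a'
    if (( oacs & 0x8 )); then                                         # bit 3 = namespace management
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
        echo "$ctrl: namespace management supported, unvmcap=$unvmcap"
    fi
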
00:03:39.007 23:41:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1543 -- # continue 00:03:39.007 23:41:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:39.007 23:41:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.007 23:41:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:39.007 23:41:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1543 -- # continue 00:03:39.007 23:41:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:39.007 23:41:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.007 23:41:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:39.007 23:41:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.007 23:41:11 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.007 23:41:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:39.008 23:41:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.008 23:41:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:39.008 23:41:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.008 23:41:11 -- common/autotest_common.sh@1543 -- # continue 00:03:39.008 23:41:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:39.008 23:41:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:39.008 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 23:41:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:39.008 23:41:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.008 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:03:39.008 23:41:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:39.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:39.833 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.833 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.833 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.833 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.091 23:41:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:40.091 23:41:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.091 23:41:12 -- common/autotest_common.sh@10 -- # set +x 00:03:40.091 23:41:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:40.091 23:41:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:40.091 23:41:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:40.091 23:41:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:40.091 23:41:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:40.091 23:41:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:40.091 23:41:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:40.091 23:41:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:40.091 23:41:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:40.091 23:41:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:40.091 23:41:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.091 23:41:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:40.091 23:41:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:40.091 23:41:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:40.091 23:41:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:40.091 23:41:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:40.091 23:41:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.091 23:41:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:40.091 
23:41:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.091 23:41:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:40.091 23:41:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.091 23:41:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:40.091 23:41:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:40.091 23:41:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.091 23:41:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:40.091 23:41:12 -- common/autotest_common.sh@1572 -- # return 0 00:03:40.091 23:41:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:40.091 23:41:12 -- common/autotest_common.sh@1580 -- # return 0 00:03:40.091 23:41:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.091 23:41:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.091 23:41:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.091 23:41:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.091 23:41:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.091 23:41:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.091 23:41:12 -- common/autotest_common.sh@10 -- # set +x 00:03:40.091 23:41:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.091 23:41:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:40.091 23:41:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.091 23:41:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.091 23:41:12 -- common/autotest_common.sh@10 -- # set +x 00:03:40.091 ************************************ 00:03:40.091 START TEST env 00:03:40.091 ************************************ 00:03:40.091 23:41:12 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:40.091 * Looking for test storage... 
00:03:40.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:40.091 23:41:12 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:40.091 23:41:12 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:40.091 23:41:12 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:40.349 23:41:12 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:40.350 23:41:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.350 23:41:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.350 23:41:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.350 23:41:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.350 23:41:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.350 23:41:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.350 23:41:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.350 23:41:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.350 23:41:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.350 23:41:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.350 23:41:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.350 23:41:12 env -- scripts/common.sh@344 -- # case "$op" in 00:03:40.350 23:41:12 env -- scripts/common.sh@345 -- # : 1 00:03:40.350 23:41:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.350 23:41:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:40.350 23:41:12 env -- scripts/common.sh@365 -- # decimal 1 00:03:40.350 23:41:12 env -- scripts/common.sh@353 -- # local d=1 00:03:40.350 23:41:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.350 23:41:12 env -- scripts/common.sh@355 -- # echo 1 00:03:40.350 23:41:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.350 23:41:12 env -- scripts/common.sh@366 -- # decimal 2 00:03:40.350 23:41:12 env -- scripts/common.sh@353 -- # local d=2 00:03:40.350 23:41:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.350 23:41:12 env -- scripts/common.sh@355 -- # echo 2 00:03:40.350 23:41:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.350 23:41:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.350 23:41:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.350 23:41:12 env -- scripts/common.sh@368 -- # return 0 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:40.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.350 --rc genhtml_branch_coverage=1 00:03:40.350 --rc genhtml_function_coverage=1 00:03:40.350 --rc genhtml_legend=1 00:03:40.350 --rc geninfo_all_blocks=1 00:03:40.350 --rc geninfo_unexecuted_blocks=1 00:03:40.350 00:03:40.350 ' 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:40.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.350 --rc genhtml_branch_coverage=1 00:03:40.350 --rc genhtml_function_coverage=1 00:03:40.350 --rc genhtml_legend=1 00:03:40.350 --rc geninfo_all_blocks=1 00:03:40.350 --rc geninfo_unexecuted_blocks=1 00:03:40.350 00:03:40.350 ' 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:40.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.350 --rc genhtml_branch_coverage=1 00:03:40.350 --rc genhtml_function_coverage=1 00:03:40.350 --rc 
genhtml_legend=1 00:03:40.350 --rc geninfo_all_blocks=1 00:03:40.350 --rc geninfo_unexecuted_blocks=1 00:03:40.350 00:03:40.350 ' 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:40.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.350 --rc genhtml_branch_coverage=1 00:03:40.350 --rc genhtml_function_coverage=1 00:03:40.350 --rc genhtml_legend=1 00:03:40.350 --rc geninfo_all_blocks=1 00:03:40.350 --rc geninfo_unexecuted_blocks=1 00:03:40.350 00:03:40.350 ' 00:03:40.350 23:41:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.350 23:41:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.350 23:41:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.350 ************************************ 00:03:40.350 START TEST env_memory 00:03:40.350 ************************************ 00:03:40.350 23:41:12 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:40.350 00:03:40.350 00:03:40.350 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.350 http://cunit.sourceforge.net/ 00:03:40.350 00:03:40.350 00:03:40.350 Suite: memory 00:03:40.350 Test: alloc and free memory map ...[2024-12-05 23:41:12.866712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:40.350 passed 00:03:40.350 Test: mem map translation ...[2024-12-05 23:41:12.897650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:40.350 [2024-12-05 23:41:12.898047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:40.350 [2024-12-05 23:41:12.898316] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:40.350 [2024-12-05 23:41:12.898544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:40.350 passed 00:03:40.350 Test: mem map registration ...[2024-12-05 23:41:12.952400] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:40.350 [2024-12-05 23:41:12.952784] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:40.350 passed 00:03:40.350 Test: mem map adjacent registrations ...passed 00:03:40.350 00:03:40.350 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.350 suites 1 1 n/a 0 0 00:03:40.350 tests 4 4 4 0 0 00:03:40.350 asserts 152 152 152 0 n/a 00:03:40.350 00:03:40.350 Elapsed time = 0.181 seconds 00:03:40.350 00:03:40.350 real 0m0.212s 00:03:40.350 user 0m0.188s 00:03:40.350 sys 0m0.013s 00:03:40.350 23:41:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.350 23:41:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:40.350 ************************************ 00:03:40.350 END TEST env_memory 00:03:40.350 ************************************ 00:03:40.609 23:41:13 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:40.609 23:41:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.609 23:41:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.609 23:41:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.609 ************************************ 00:03:40.609 START TEST env_vtophys 00:03:40.609 ************************************ 00:03:40.609 23:41:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:40.609 EAL: lib.eal log level changed from notice to debug 00:03:40.609 EAL: Detected lcore 0 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 1 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 2 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 3 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 4 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 5 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 6 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 7 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 8 as core 0 on socket 0 00:03:40.609 EAL: Detected lcore 9 as core 0 on socket 0 00:03:40.609 EAL: Maximum logical cores by configuration: 128 00:03:40.609 EAL: Detected CPU lcores: 10 00:03:40.609 EAL: Detected NUMA nodes: 1 00:03:40.609 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:40.609 EAL: Detected shared linkage of DPDK 00:03:40.609 EAL: No shared files mode enabled, IPC will be disabled 00:03:40.609 EAL: Selected IOVA mode 'PA' 00:03:40.609 EAL: Probing VFIO support... 00:03:40.609 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:40.609 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:40.609 EAL: Ask a virtual area of 0x2e000 bytes 00:03:40.609 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:40.609 EAL: Setting up physically contiguous memory... 
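Note: the EAL lines above show vtophys probing for VFIO and falling back to PA IOVA mode because /sys/module/vfio does not exist on this VM. A quick way to reproduce those checks by hand on the guest (not part of the test itself; assumes a standard Linux sysfs layout):
  [ -d /sys/module/vfio ] && echo "vfio loaded" || echo "vfio not loaded -> EAL skips VFIO, uses uio + PA mode"
  lsmod | grep -E '^vfio' || true
  grep -i huge /proc/meminfo    # hugepage pool backing the 2 MB memseg lists set up below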
00:03:40.609 EAL: Setting maximum number of open files to 524288 00:03:40.609 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:40.609 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:40.609 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.609 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:40.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.609 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.609 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:40.609 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:40.609 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.609 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:40.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.609 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.609 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:40.609 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:40.609 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.609 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:40.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.609 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.609 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:40.609 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:40.609 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.609 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:40.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.609 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.609 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:40.609 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:40.609 EAL: Hugepages will be freed exactly as allocated. 00:03:40.609 EAL: No shared files mode enabled, IPC is disabled 00:03:40.609 EAL: No shared files mode enabled, IPC is disabled 00:03:40.609 EAL: TSC frequency is ~2600000 KHz 00:03:40.609 EAL: Main lcore 0 is ready (tid=7fcb1fcbaa40;cpuset=[0]) 00:03:40.609 EAL: Trying to obtain current memory policy. 00:03:40.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.609 EAL: Restoring previous memory policy: 0 00:03:40.609 EAL: request: mp_malloc_sync 00:03:40.609 EAL: No shared files mode enabled, IPC is disabled 00:03:40.609 EAL: Heap on socket 0 was expanded by 2MB 00:03:40.609 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:40.609 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:40.609 EAL: Mem event callback 'spdk:(nil)' registered 00:03:40.609 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:40.609 00:03:40.609 00:03:40.609 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.609 http://cunit.sourceforge.net/ 00:03:40.609 00:03:40.609 00:03:40.609 Suite: components_suite 00:03:41.176 Test: vtophys_malloc_test ...passed 00:03:41.176 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
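Note: each of the four memseg lists above reserves a small header area (0x61000) plus 0x400000000 bytes of virtual address space. That figure is simply n_segs times the hugepage size; a shell one-liner to sanity-check it (plain arithmetic, not executed by the test):
  n_segs=8192; hugepage_sz=$((2 * 1024 * 1024))
  printf 'per-list VA reservation: 0x%x bytes (%d GiB)\n' $((n_segs * hugepage_sz)) $(( (n_segs * hugepage_sz) / (1024 ** 3) ))
  # -> 0x400000000 (16 GiB), matching each "size = 0x400000000" reservation logged above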
00:03:41.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.176 EAL: Restoring previous memory policy: 4 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was expanded by 4MB 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was shrunk by 4MB 00:03:41.176 EAL: Trying to obtain current memory policy. 00:03:41.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.176 EAL: Restoring previous memory policy: 4 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was expanded by 6MB 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was shrunk by 6MB 00:03:41.176 EAL: Trying to obtain current memory policy. 00:03:41.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.176 EAL: Restoring previous memory policy: 4 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was expanded by 10MB 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was shrunk by 10MB 00:03:41.176 EAL: Trying to obtain current memory policy. 00:03:41.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.176 EAL: Restoring previous memory policy: 4 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was expanded by 18MB 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was shrunk by 18MB 00:03:41.176 EAL: Trying to obtain current memory policy. 00:03:41.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.176 EAL: Restoring previous memory policy: 4 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was expanded by 34MB 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was shrunk by 34MB 00:03:41.176 EAL: Trying to obtain current memory policy. 
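Note: the repeated "Setting policy MPOL_PREFERRED for socket 0" lines are EAL steering each heap expansion to NUMA node 0; this guest only exposes one node ("Detected NUMA nodes: 1" above). To confirm the topology from the shell (assumes the numactl package is installed, which the test itself does not require):
  numactl --hardware
  cat /sys/devices/system/node/online    # -> 0 on this single-node VM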
00:03:41.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.176 EAL: Restoring previous memory policy: 4 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was expanded by 66MB 00:03:41.176 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.176 EAL: request: mp_malloc_sync 00:03:41.176 EAL: No shared files mode enabled, IPC is disabled 00:03:41.176 EAL: Heap on socket 0 was shrunk by 66MB 00:03:41.434 EAL: Trying to obtain current memory policy. 00:03:41.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.435 EAL: Restoring previous memory policy: 4 00:03:41.435 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.435 EAL: request: mp_malloc_sync 00:03:41.435 EAL: No shared files mode enabled, IPC is disabled 00:03:41.435 EAL: Heap on socket 0 was expanded by 130MB 00:03:41.435 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.435 EAL: request: mp_malloc_sync 00:03:41.435 EAL: No shared files mode enabled, IPC is disabled 00:03:41.435 EAL: Heap on socket 0 was shrunk by 130MB 00:03:41.693 EAL: Trying to obtain current memory policy. 00:03:41.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.693 EAL: Restoring previous memory policy: 4 00:03:41.693 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.693 EAL: request: mp_malloc_sync 00:03:41.693 EAL: No shared files mode enabled, IPC is disabled 00:03:41.693 EAL: Heap on socket 0 was expanded by 258MB 00:03:41.950 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.950 EAL: request: mp_malloc_sync 00:03:41.950 EAL: No shared files mode enabled, IPC is disabled 00:03:41.950 EAL: Heap on socket 0 was shrunk by 258MB 00:03:42.208 EAL: Trying to obtain current memory policy. 00:03:42.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.464 EAL: Restoring previous memory policy: 4 00:03:42.464 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.464 EAL: request: mp_malloc_sync 00:03:42.464 EAL: No shared files mode enabled, IPC is disabled 00:03:42.464 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.058 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.058 EAL: request: mp_malloc_sync 00:03:43.058 EAL: No shared files mode enabled, IPC is disabled 00:03:43.058 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.627 EAL: Trying to obtain current memory policy. 
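Note: the expand/shrink sizes in this suite step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB (below), i.e. 2^k + 2 MB. That pattern is read off this log rather than taken from the vtophys test source, but it is easy to check:
  for k in $(seq 1 10); do printf '%dMB ' $((2 ** k + 2)); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB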
00:03:43.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.627 EAL: Restoring previous memory policy: 4 00:03:43.627 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.627 EAL: request: mp_malloc_sync 00:03:43.627 EAL: No shared files mode enabled, IPC is disabled 00:03:43.627 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.003 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.003 EAL: request: mp_malloc_sync 00:03:45.003 EAL: No shared files mode enabled, IPC is disabled 00:03:45.003 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:45.953 passed 00:03:45.953 00:03:45.953 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.953 suites 1 1 n/a 0 0 00:03:45.953 tests 2 2 2 0 0 00:03:45.953 asserts 5866 5866 5866 0 n/a 00:03:45.953 00:03:45.953 Elapsed time = 5.103 seconds 00:03:45.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.953 EAL: request: mp_malloc_sync 00:03:45.953 EAL: No shared files mode enabled, IPC is disabled 00:03:45.953 EAL: Heap on socket 0 was shrunk by 2MB 00:03:45.953 EAL: No shared files mode enabled, IPC is disabled 00:03:45.953 EAL: No shared files mode enabled, IPC is disabled 00:03:45.953 EAL: No shared files mode enabled, IPC is disabled 00:03:45.953 00:03:45.953 real 0m5.378s 00:03:45.953 user 0m4.502s 00:03:45.953 sys 0m0.719s 00:03:45.953 ************************************ 00:03:45.953 END TEST env_vtophys 00:03:45.953 ************************************ 00:03:45.953 23:41:18 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.953 23:41:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:45.953 23:41:18 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.953 23:41:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.953 23:41:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.953 23:41:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.953 ************************************ 00:03:45.953 START TEST env_pci 00:03:45.953 ************************************ 00:03:45.953 23:41:18 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.953 00:03:45.953 00:03:45.953 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.953 http://cunit.sourceforge.net/ 00:03:45.953 00:03:45.953 00:03:45.953 Suite: pci 00:03:45.953 Test: pci_hook ...[2024-12-05 23:41:18.514752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57032 has claimed it 00:03:45.953 passed 00:03:45.953 00:03:45.953 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.953 suites 1 1 n/a 0 0 00:03:45.953 tests 1 1 1 0 0 00:03:45.953 asserts 25 25 25 0 n/a 00:03:45.953 00:03:45.953 Elapsed time = 0.004 seconds 00:03:45.953 EAL: Cannot find device (10000:00:01.0) 00:03:45.953 EAL: Failed to attach device on primary process 00:03:45.953 ************************************ 00:03:45.953 END TEST env_pci 00:03:45.953 ************************************ 00:03:45.953 00:03:45.953 real 0m0.056s 00:03:45.953 user 0m0.029s 00:03:45.953 sys 0m0.027s 00:03:45.953 23:41:18 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.953 23:41:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:45.953 23:41:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.953 23:41:18 env -- env/env.sh@15 -- # uname 00:03:45.953 23:41:18 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:45.953 23:41:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:45.953 23:41:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.953 23:41:18 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:45.953 23:41:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.953 23:41:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.953 ************************************ 00:03:45.953 START TEST env_dpdk_post_init 00:03:45.953 ************************************ 00:03:45.953 23:41:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.953 EAL: Detected CPU lcores: 10 00:03:45.953 EAL: Detected NUMA nodes: 1 00:03:45.953 EAL: Detected shared linkage of DPDK 00:03:45.953 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.953 EAL: Selected IOVA mode 'PA' 00:03:46.211 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:46.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:46.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:46.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:46.211 Starting DPDK initialization... 00:03:46.211 Starting SPDK post initialization... 00:03:46.211 SPDK NVMe probe 00:03:46.211 Attaching to 0000:00:10.0 00:03:46.211 Attaching to 0000:00:11.0 00:03:46.211 Attaching to 0000:00:12.0 00:03:46.211 Attaching to 0000:00:13.0 00:03:46.211 Attached to 0000:00:10.0 00:03:46.211 Attached to 0000:00:11.0 00:03:46.211 Attached to 0000:00:13.0 00:03:46.211 Attached to 0000:00:12.0 00:03:46.211 Cleaning up... 
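Note: env_dpdk_post_init above attached to four emulated NVMe controllers (PCI ID 1b36:0010 at 0000:00:10.0 through 0000:00:13.0). Two ways to look at the same devices from the shell, outside the test (assumes lspci is available in the guest; the setup.sh path is the one used elsewhere in this job):
  lspci -D -d 1b36:0010                                        # list the QEMU NVMe functions
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status    # SPDK's view of driver binding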
00:03:46.211 ************************************ 00:03:46.211 END TEST env_dpdk_post_init 00:03:46.211 ************************************ 00:03:46.211 00:03:46.211 real 0m0.249s 00:03:46.211 user 0m0.079s 00:03:46.211 sys 0m0.071s 00:03:46.211 23:41:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.211 23:41:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.211 23:41:18 env -- env/env.sh@26 -- # uname 00:03:46.211 23:41:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:46.211 23:41:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.211 23:41:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.211 23:41:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.211 23:41:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.211 ************************************ 00:03:46.211 START TEST env_mem_callbacks 00:03:46.211 ************************************ 00:03:46.211 23:41:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.211 EAL: Detected CPU lcores: 10 00:03:46.211 EAL: Detected NUMA nodes: 1 00:03:46.211 EAL: Detected shared linkage of DPDK 00:03:46.470 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.470 EAL: Selected IOVA mode 'PA' 00:03:46.470 00:03:46.470 00:03:46.470 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.470 http://cunit.sourceforge.net/ 00:03:46.470 00:03:46.470 00:03:46.470 Suite: memory 00:03:46.470 Test: test ... 00:03:46.470 register 0x200000200000 2097152 00:03:46.470 malloc 3145728 00:03:46.470 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.470 register 0x200000400000 4194304 00:03:46.470 buf 0x2000004fffc0 len 3145728 PASSED 00:03:46.470 malloc 64 00:03:46.470 buf 0x2000004ffec0 len 64 PASSED 00:03:46.470 malloc 4194304 00:03:46.470 register 0x200000800000 6291456 00:03:46.470 buf 0x2000009fffc0 len 4194304 PASSED 00:03:46.470 free 0x2000004fffc0 3145728 00:03:46.470 free 0x2000004ffec0 64 00:03:46.470 unregister 0x200000400000 4194304 PASSED 00:03:46.470 free 0x2000009fffc0 4194304 00:03:46.470 unregister 0x200000800000 6291456 PASSED 00:03:46.470 malloc 8388608 00:03:46.470 register 0x200000400000 10485760 00:03:46.470 buf 0x2000005fffc0 len 8388608 PASSED 00:03:46.470 free 0x2000005fffc0 8388608 00:03:46.470 unregister 0x200000400000 10485760 PASSED 00:03:46.470 passed 00:03:46.470 00:03:46.470 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.470 suites 1 1 n/a 0 0 00:03:46.470 tests 1 1 1 0 0 00:03:46.470 asserts 15 15 15 0 n/a 00:03:46.470 00:03:46.470 Elapsed time = 0.042 seconds 00:03:46.470 00:03:46.470 real 0m0.212s 00:03:46.470 user 0m0.062s 00:03:46.470 sys 0m0.046s 00:03:46.470 23:41:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.470 23:41:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:46.470 ************************************ 00:03:46.470 END TEST env_mem_callbacks 00:03:46.470 ************************************ 00:03:46.470 00:03:46.470 real 0m6.443s 00:03:46.470 user 0m5.006s 00:03:46.470 sys 0m1.063s 00:03:46.470 23:41:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.470 ************************************ 00:03:46.470 END TEST env 00:03:46.470 ************************************ 00:03:46.470 23:41:19 env -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.470 23:41:19 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:46.470 23:41:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.470 23:41:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.470 23:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:46.470 ************************************ 00:03:46.470 START TEST rpc 00:03:46.470 ************************************ 00:03:46.470 23:41:19 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:46.729 * Looking for test storage... 00:03:46.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.729 23:41:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.729 23:41:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.729 23:41:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.729 23:41:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.729 23:41:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.729 23:41:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.729 23:41:19 rpc -- scripts/common.sh@345 -- # : 1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.729 23:41:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.729 23:41:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.729 23:41:19 rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.729 23:41:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.729 23:41:19 rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.729 23:41:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.729 23:41:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.729 23:41:19 rpc -- scripts/common.sh@368 -- # return 0 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:46.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.729 --rc genhtml_branch_coverage=1 00:03:46.729 --rc genhtml_function_coverage=1 00:03:46.729 --rc genhtml_legend=1 00:03:46.729 --rc geninfo_all_blocks=1 00:03:46.729 --rc geninfo_unexecuted_blocks=1 00:03:46.729 00:03:46.729 ' 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:46.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.729 --rc genhtml_branch_coverage=1 00:03:46.729 --rc genhtml_function_coverage=1 00:03:46.729 --rc genhtml_legend=1 00:03:46.729 --rc geninfo_all_blocks=1 00:03:46.729 --rc geninfo_unexecuted_blocks=1 00:03:46.729 00:03:46.729 ' 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:46.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.729 --rc genhtml_branch_coverage=1 00:03:46.729 --rc genhtml_function_coverage=1 00:03:46.729 --rc genhtml_legend=1 00:03:46.729 --rc geninfo_all_blocks=1 00:03:46.729 --rc geninfo_unexecuted_blocks=1 00:03:46.729 00:03:46.729 ' 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:46.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.729 --rc genhtml_branch_coverage=1 00:03:46.729 --rc genhtml_function_coverage=1 00:03:46.729 --rc genhtml_legend=1 00:03:46.729 --rc geninfo_all_blocks=1 00:03:46.729 --rc geninfo_unexecuted_blocks=1 00:03:46.729 00:03:46.729 ' 00:03:46.729 23:41:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57158 00:03:46.729 23:41:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.729 23:41:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57158 00:03:46.729 23:41:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 57158 ']' 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
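Note: rpc.sh drives the target through the rpc_cmd/waitforlisten helpers; a hand-run equivalent of the setup exercised below, with binary paths taken from this log (the pid and returned bdev name are whatever your own run produces, not the values shown here):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  "$rpc" bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
  "$rpc" bdev_passthru_create -b Malloc0 -p Passthru0
  "$rpc" bdev_get_bdevs | jq length                    # 2: the malloc bdev plus the passthru on top
  kill "$tgt_pid"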
00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:46.729 23:41:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.729 [2024-12-05 23:41:19.394270] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:03:46.729 [2024-12-05 23:41:19.394413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57158 ] 00:03:46.988 [2024-12-05 23:41:19.550393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.988 [2024-12-05 23:41:19.636262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:46.988 [2024-12-05 23:41:19.636324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57158' to capture a snapshot of events at runtime. 00:03:46.988 [2024-12-05 23:41:19.636337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:46.988 [2024-12-05 23:41:19.636349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:46.988 [2024-12-05 23:41:19.636358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57158 for offline analysis/debug. 00:03:46.988 [2024-12-05 23:41:19.637255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.552 23:41:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.552 23:41:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:47.552 23:41:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.552 23:41:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.552 23:41:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.552 23:41:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.552 23:41:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.552 23:41:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.552 23:41:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.552 ************************************ 00:03:47.552 START TEST rpc_integrity 00:03:47.552 ************************************ 00:03:47.552 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:47.552 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.552 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.552 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.552 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.552 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.552 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.811 { 00:03:47.811 "name": "Malloc0", 00:03:47.811 "aliases": [ 00:03:47.811 "6bc7294f-2570-4f68-b631-21d98f237ca3" 00:03:47.811 ], 00:03:47.811 "product_name": "Malloc disk", 00:03:47.811 "block_size": 512, 00:03:47.811 "num_blocks": 16384, 00:03:47.811 "uuid": "6bc7294f-2570-4f68-b631-21d98f237ca3", 00:03:47.811 "assigned_rate_limits": { 00:03:47.811 "rw_ios_per_sec": 0, 00:03:47.811 "rw_mbytes_per_sec": 0, 00:03:47.811 "r_mbytes_per_sec": 0, 00:03:47.811 "w_mbytes_per_sec": 0 00:03:47.811 }, 00:03:47.811 "claimed": false, 00:03:47.811 "zoned": false, 00:03:47.811 "supported_io_types": { 00:03:47.811 "read": true, 00:03:47.811 "write": true, 00:03:47.811 "unmap": true, 00:03:47.811 "flush": true, 00:03:47.811 "reset": true, 00:03:47.811 "nvme_admin": false, 00:03:47.811 "nvme_io": false, 00:03:47.811 "nvme_io_md": false, 00:03:47.811 "write_zeroes": true, 00:03:47.811 "zcopy": true, 00:03:47.811 "get_zone_info": false, 00:03:47.811 "zone_management": false, 00:03:47.811 "zone_append": false, 00:03:47.811 "compare": false, 00:03:47.811 "compare_and_write": false, 00:03:47.811 "abort": true, 00:03:47.811 "seek_hole": false, 00:03:47.811 "seek_data": false, 00:03:47.811 "copy": true, 00:03:47.811 "nvme_iov_md": false 00:03:47.811 }, 00:03:47.811 "memory_domains": [ 00:03:47.811 { 00:03:47.811 "dma_device_id": "system", 00:03:47.811 "dma_device_type": 1 00:03:47.811 }, 00:03:47.811 { 00:03:47.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.811 "dma_device_type": 2 00:03:47.811 } 00:03:47.811 ], 00:03:47.811 "driver_specific": {} 00:03:47.811 } 00:03:47.811 ]' 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 [2024-12-05 23:41:20.335425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.811 [2024-12-05 23:41:20.335489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.811 [2024-12-05 23:41:20.335513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:47.811 [2024-12-05 23:41:20.335524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.811 [2024-12-05 23:41:20.337430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.811 [2024-12-05 23:41:20.337470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.811 Passthru0 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 
23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.811 { 00:03:47.811 "name": "Malloc0", 00:03:47.811 "aliases": [ 00:03:47.811 "6bc7294f-2570-4f68-b631-21d98f237ca3" 00:03:47.811 ], 00:03:47.811 "product_name": "Malloc disk", 00:03:47.811 "block_size": 512, 00:03:47.811 "num_blocks": 16384, 00:03:47.811 "uuid": "6bc7294f-2570-4f68-b631-21d98f237ca3", 00:03:47.811 "assigned_rate_limits": { 00:03:47.811 "rw_ios_per_sec": 0, 00:03:47.811 "rw_mbytes_per_sec": 0, 00:03:47.811 "r_mbytes_per_sec": 0, 00:03:47.811 "w_mbytes_per_sec": 0 00:03:47.811 }, 00:03:47.811 "claimed": true, 00:03:47.811 "claim_type": "exclusive_write", 00:03:47.811 "zoned": false, 00:03:47.811 "supported_io_types": { 00:03:47.811 "read": true, 00:03:47.811 "write": true, 00:03:47.811 "unmap": true, 00:03:47.811 "flush": true, 00:03:47.811 "reset": true, 00:03:47.811 "nvme_admin": false, 00:03:47.811 "nvme_io": false, 00:03:47.811 "nvme_io_md": false, 00:03:47.811 "write_zeroes": true, 00:03:47.811 "zcopy": true, 00:03:47.811 "get_zone_info": false, 00:03:47.811 "zone_management": false, 00:03:47.811 "zone_append": false, 00:03:47.811 "compare": false, 00:03:47.811 "compare_and_write": false, 00:03:47.811 "abort": true, 00:03:47.811 "seek_hole": false, 00:03:47.811 "seek_data": false, 00:03:47.811 "copy": true, 00:03:47.811 "nvme_iov_md": false 00:03:47.811 }, 00:03:47.811 "memory_domains": [ 00:03:47.811 { 00:03:47.811 "dma_device_id": "system", 00:03:47.811 "dma_device_type": 1 00:03:47.811 }, 00:03:47.811 { 00:03:47.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.811 "dma_device_type": 2 00:03:47.811 } 00:03:47.811 ], 00:03:47.811 "driver_specific": {} 00:03:47.811 }, 00:03:47.811 { 00:03:47.811 "name": "Passthru0", 00:03:47.811 "aliases": [ 00:03:47.811 "1a395756-710d-5f24-a0fc-fe8eaff98714" 00:03:47.811 ], 00:03:47.811 "product_name": "passthru", 00:03:47.811 "block_size": 512, 00:03:47.811 "num_blocks": 16384, 00:03:47.811 "uuid": "1a395756-710d-5f24-a0fc-fe8eaff98714", 00:03:47.811 "assigned_rate_limits": { 00:03:47.811 "rw_ios_per_sec": 0, 00:03:47.811 "rw_mbytes_per_sec": 0, 00:03:47.811 "r_mbytes_per_sec": 0, 00:03:47.811 "w_mbytes_per_sec": 0 00:03:47.811 }, 00:03:47.811 "claimed": false, 00:03:47.811 "zoned": false, 00:03:47.811 "supported_io_types": { 00:03:47.811 "read": true, 00:03:47.811 "write": true, 00:03:47.811 "unmap": true, 00:03:47.811 "flush": true, 00:03:47.811 "reset": true, 00:03:47.811 "nvme_admin": false, 00:03:47.811 "nvme_io": false, 00:03:47.811 "nvme_io_md": false, 00:03:47.811 "write_zeroes": true, 00:03:47.811 "zcopy": true, 00:03:47.811 "get_zone_info": false, 00:03:47.811 "zone_management": false, 00:03:47.811 "zone_append": false, 00:03:47.811 "compare": false, 00:03:47.811 "compare_and_write": false, 00:03:47.811 "abort": true, 00:03:47.811 "seek_hole": false, 00:03:47.811 "seek_data": false, 00:03:47.811 "copy": true, 00:03:47.811 "nvme_iov_md": false 00:03:47.811 }, 00:03:47.811 "memory_domains": [ 00:03:47.811 { 00:03:47.811 "dma_device_id": "system", 00:03:47.811 "dma_device_type": 1 00:03:47.811 }, 00:03:47.811 { 00:03:47.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.811 "dma_device_type": 2 
00:03:47.811 } 00:03:47.811 ], 00:03:47.811 "driver_specific": { 00:03:47.811 "passthru": { 00:03:47.811 "name": "Passthru0", 00:03:47.811 "base_bdev_name": "Malloc0" 00:03:47.811 } 00:03:47.811 } 00:03:47.811 } 00:03:47.811 ]' 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.811 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.812 ************************************ 00:03:47.812 END TEST rpc_integrity 00:03:47.812 ************************************ 00:03:47.812 23:41:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.812 00:03:47.812 real 0m0.218s 00:03:47.812 user 0m0.115s 00:03:47.812 sys 0m0.025s 00:03:47.812 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.812 23:41:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.812 23:41:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.812 23:41:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.812 23:41:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.812 23:41:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.812 ************************************ 00:03:47.812 START TEST rpc_plugins 00:03:47.812 ************************************ 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:47.812 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.812 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.812 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.812 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.812 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.812 { 00:03:47.812 "name": "Malloc1", 00:03:47.812 "aliases": 
[ 00:03:47.812 "abdd996e-cf5e-4a74-aa1f-c191583d3bc0" 00:03:47.812 ], 00:03:47.812 "product_name": "Malloc disk", 00:03:47.812 "block_size": 4096, 00:03:47.812 "num_blocks": 256, 00:03:47.812 "uuid": "abdd996e-cf5e-4a74-aa1f-c191583d3bc0", 00:03:47.812 "assigned_rate_limits": { 00:03:47.812 "rw_ios_per_sec": 0, 00:03:47.812 "rw_mbytes_per_sec": 0, 00:03:47.812 "r_mbytes_per_sec": 0, 00:03:47.812 "w_mbytes_per_sec": 0 00:03:47.812 }, 00:03:47.812 "claimed": false, 00:03:47.812 "zoned": false, 00:03:47.812 "supported_io_types": { 00:03:47.812 "read": true, 00:03:47.812 "write": true, 00:03:47.812 "unmap": true, 00:03:47.812 "flush": true, 00:03:47.812 "reset": true, 00:03:47.812 "nvme_admin": false, 00:03:47.812 "nvme_io": false, 00:03:47.812 "nvme_io_md": false, 00:03:47.812 "write_zeroes": true, 00:03:47.812 "zcopy": true, 00:03:47.812 "get_zone_info": false, 00:03:47.812 "zone_management": false, 00:03:47.812 "zone_append": false, 00:03:47.812 "compare": false, 00:03:47.812 "compare_and_write": false, 00:03:47.812 "abort": true, 00:03:47.812 "seek_hole": false, 00:03:47.812 "seek_data": false, 00:03:47.812 "copy": true, 00:03:47.812 "nvme_iov_md": false 00:03:47.812 }, 00:03:47.812 "memory_domains": [ 00:03:47.812 { 00:03:47.812 "dma_device_id": "system", 00:03:47.812 "dma_device_type": 1 00:03:47.812 }, 00:03:47.812 { 00:03:47.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.812 "dma_device_type": 2 00:03:47.812 } 00:03:47.812 ], 00:03:47.812 "driver_specific": {} 00:03:47.812 } 00:03:47.812 ]' 00:03:47.812 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:48.070 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:48.070 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.070 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.070 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:48.070 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:48.070 ************************************ 00:03:48.070 END TEST rpc_plugins 00:03:48.070 ************************************ 00:03:48.070 23:41:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:48.070 00:03:48.070 real 0m0.109s 00:03:48.070 user 0m0.064s 00:03:48.070 sys 0m0.013s 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.070 23:41:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.070 23:41:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:48.070 23:41:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.070 23:41:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.070 23:41:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.070 ************************************ 00:03:48.070 START TEST rpc_trace_cmd_test 00:03:48.070 ************************************ 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.070 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57158", 00:03:48.070 "tpoint_group_mask": "0x8", 00:03:48.070 "iscsi_conn": { 00:03:48.070 "mask": "0x2", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "scsi": { 00:03:48.070 "mask": "0x4", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "bdev": { 00:03:48.070 "mask": "0x8", 00:03:48.070 "tpoint_mask": "0xffffffffffffffff" 00:03:48.070 }, 00:03:48.070 "nvmf_rdma": { 00:03:48.070 "mask": "0x10", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "nvmf_tcp": { 00:03:48.070 "mask": "0x20", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "ftl": { 00:03:48.070 "mask": "0x40", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "blobfs": { 00:03:48.070 "mask": "0x80", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "dsa": { 00:03:48.070 "mask": "0x200", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "thread": { 00:03:48.070 "mask": "0x400", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "nvme_pcie": { 00:03:48.070 "mask": "0x800", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "iaa": { 00:03:48.070 "mask": "0x1000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "nvme_tcp": { 00:03:48.070 "mask": "0x2000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "bdev_nvme": { 00:03:48.070 "mask": "0x4000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "sock": { 00:03:48.070 "mask": "0x8000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "blob": { 00:03:48.070 "mask": "0x10000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "bdev_raid": { 00:03:48.070 "mask": "0x20000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 }, 00:03:48.070 "scheduler": { 00:03:48.070 "mask": "0x40000", 00:03:48.070 "tpoint_mask": "0x0" 00:03:48.070 } 00:03:48.070 }' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.070 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.329 ************************************ 00:03:48.329 END TEST rpc_trace_cmd_test 00:03:48.329 ************************************ 00:03:48.329 23:41:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:48.329 00:03:48.329 real 0m0.168s 
00:03:48.329 user 0m0.136s 00:03:48.329 sys 0m0.022s 00:03:48.329 23:41:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.329 23:41:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.329 23:41:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.329 23:41:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.329 23:41:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.329 23:41:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.329 23:41:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.329 23:41:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 ************************************ 00:03:48.330 START TEST rpc_daemon_integrity 00:03:48.330 ************************************ 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.330 { 00:03:48.330 "name": "Malloc2", 00:03:48.330 "aliases": [ 00:03:48.330 "a12035bb-b4bd-4e8a-ae2b-21d785f9cc84" 00:03:48.330 ], 00:03:48.330 "product_name": "Malloc disk", 00:03:48.330 "block_size": 512, 00:03:48.330 "num_blocks": 16384, 00:03:48.330 "uuid": "a12035bb-b4bd-4e8a-ae2b-21d785f9cc84", 00:03:48.330 "assigned_rate_limits": { 00:03:48.330 "rw_ios_per_sec": 0, 00:03:48.330 "rw_mbytes_per_sec": 0, 00:03:48.330 "r_mbytes_per_sec": 0, 00:03:48.330 "w_mbytes_per_sec": 0 00:03:48.330 }, 00:03:48.330 "claimed": false, 00:03:48.330 "zoned": false, 00:03:48.330 "supported_io_types": { 00:03:48.330 "read": true, 00:03:48.330 "write": true, 00:03:48.330 "unmap": true, 00:03:48.330 "flush": true, 00:03:48.330 "reset": true, 00:03:48.330 "nvme_admin": false, 00:03:48.330 "nvme_io": false, 00:03:48.330 "nvme_io_md": false, 00:03:48.330 "write_zeroes": true, 00:03:48.330 "zcopy": true, 00:03:48.330 "get_zone_info": false, 00:03:48.330 "zone_management": false, 00:03:48.330 "zone_append": false, 00:03:48.330 "compare": false, 00:03:48.330 
"compare_and_write": false, 00:03:48.330 "abort": true, 00:03:48.330 "seek_hole": false, 00:03:48.330 "seek_data": false, 00:03:48.330 "copy": true, 00:03:48.330 "nvme_iov_md": false 00:03:48.330 }, 00:03:48.330 "memory_domains": [ 00:03:48.330 { 00:03:48.330 "dma_device_id": "system", 00:03:48.330 "dma_device_type": 1 00:03:48.330 }, 00:03:48.330 { 00:03:48.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.330 "dma_device_type": 2 00:03:48.330 } 00:03:48.330 ], 00:03:48.330 "driver_specific": {} 00:03:48.330 } 00:03:48.330 ]' 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 [2024-12-05 23:41:20.945508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.330 [2024-12-05 23:41:20.945575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.330 [2024-12-05 23:41:20.945593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:48.330 [2024-12-05 23:41:20.945602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.330 [2024-12-05 23:41:20.947435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.330 [2024-12-05 23:41:20.947473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.330 Passthru0 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.330 { 00:03:48.330 "name": "Malloc2", 00:03:48.330 "aliases": [ 00:03:48.330 "a12035bb-b4bd-4e8a-ae2b-21d785f9cc84" 00:03:48.330 ], 00:03:48.330 "product_name": "Malloc disk", 00:03:48.330 "block_size": 512, 00:03:48.330 "num_blocks": 16384, 00:03:48.330 "uuid": "a12035bb-b4bd-4e8a-ae2b-21d785f9cc84", 00:03:48.330 "assigned_rate_limits": { 00:03:48.330 "rw_ios_per_sec": 0, 00:03:48.330 "rw_mbytes_per_sec": 0, 00:03:48.330 "r_mbytes_per_sec": 0, 00:03:48.330 "w_mbytes_per_sec": 0 00:03:48.330 }, 00:03:48.330 "claimed": true, 00:03:48.330 "claim_type": "exclusive_write", 00:03:48.330 "zoned": false, 00:03:48.330 "supported_io_types": { 00:03:48.330 "read": true, 00:03:48.330 "write": true, 00:03:48.330 "unmap": true, 00:03:48.330 "flush": true, 00:03:48.330 "reset": true, 00:03:48.330 "nvme_admin": false, 00:03:48.330 "nvme_io": false, 00:03:48.330 "nvme_io_md": false, 00:03:48.330 "write_zeroes": true, 00:03:48.330 "zcopy": true, 00:03:48.330 "get_zone_info": false, 00:03:48.330 "zone_management": false, 00:03:48.330 "zone_append": false, 00:03:48.330 "compare": false, 00:03:48.330 "compare_and_write": false, 00:03:48.330 "abort": true, 00:03:48.330 "seek_hole": false, 00:03:48.330 "seek_data": false, 
00:03:48.330 "copy": true, 00:03:48.330 "nvme_iov_md": false 00:03:48.330 }, 00:03:48.330 "memory_domains": [ 00:03:48.330 { 00:03:48.330 "dma_device_id": "system", 00:03:48.330 "dma_device_type": 1 00:03:48.330 }, 00:03:48.330 { 00:03:48.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.330 "dma_device_type": 2 00:03:48.330 } 00:03:48.330 ], 00:03:48.330 "driver_specific": {} 00:03:48.330 }, 00:03:48.330 { 00:03:48.330 "name": "Passthru0", 00:03:48.330 "aliases": [ 00:03:48.330 "553cb56f-fc41-5940-812c-823e9cd691b1" 00:03:48.330 ], 00:03:48.330 "product_name": "passthru", 00:03:48.330 "block_size": 512, 00:03:48.330 "num_blocks": 16384, 00:03:48.330 "uuid": "553cb56f-fc41-5940-812c-823e9cd691b1", 00:03:48.330 "assigned_rate_limits": { 00:03:48.330 "rw_ios_per_sec": 0, 00:03:48.330 "rw_mbytes_per_sec": 0, 00:03:48.330 "r_mbytes_per_sec": 0, 00:03:48.330 "w_mbytes_per_sec": 0 00:03:48.330 }, 00:03:48.330 "claimed": false, 00:03:48.330 "zoned": false, 00:03:48.330 "supported_io_types": { 00:03:48.330 "read": true, 00:03:48.330 "write": true, 00:03:48.330 "unmap": true, 00:03:48.330 "flush": true, 00:03:48.330 "reset": true, 00:03:48.330 "nvme_admin": false, 00:03:48.330 "nvme_io": false, 00:03:48.330 "nvme_io_md": false, 00:03:48.330 "write_zeroes": true, 00:03:48.330 "zcopy": true, 00:03:48.330 "get_zone_info": false, 00:03:48.330 "zone_management": false, 00:03:48.330 "zone_append": false, 00:03:48.330 "compare": false, 00:03:48.330 "compare_and_write": false, 00:03:48.330 "abort": true, 00:03:48.330 "seek_hole": false, 00:03:48.330 "seek_data": false, 00:03:48.330 "copy": true, 00:03:48.330 "nvme_iov_md": false 00:03:48.330 }, 00:03:48.330 "memory_domains": [ 00:03:48.330 { 00:03:48.330 "dma_device_id": "system", 00:03:48.330 "dma_device_type": 1 00:03:48.330 }, 00:03:48.330 { 00:03:48.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.330 "dma_device_type": 2 00:03:48.330 } 00:03:48.330 ], 00:03:48.330 "driver_specific": { 00:03:48.330 "passthru": { 00:03:48.330 "name": "Passthru0", 00:03:48.330 "base_bdev_name": "Malloc2" 00:03:48.330 } 00:03:48.330 } 00:03:48.330 } 00:03:48.330 ]' 00:03:48.330 23:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.330 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.589 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.589 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:03:48.589 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.589 ************************************ 00:03:48.589 END TEST rpc_daemon_integrity 00:03:48.589 ************************************ 00:03:48.589 23:41:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.589 00:03:48.589 real 0m0.235s 00:03:48.589 user 0m0.132s 00:03:48.589 sys 0m0.029s 00:03:48.589 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.589 23:41:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.589 23:41:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.589 23:41:21 rpc -- rpc/rpc.sh@84 -- # killprocess 57158 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 57158 ']' 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@958 -- # kill -0 57158 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@959 -- # uname 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57158 00:03:48.589 killing process with pid 57158 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57158' 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@973 -- # kill 57158 00:03:48.589 23:41:21 rpc -- common/autotest_common.sh@978 -- # wait 57158 00:03:49.972 00:03:49.972 real 0m3.197s 00:03:49.972 user 0m3.604s 00:03:49.972 sys 0m0.575s 00:03:49.972 23:41:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.972 ************************************ 00:03:49.972 END TEST rpc 00:03:49.972 ************************************ 00:03:49.972 23:41:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.972 23:41:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:49.972 23:41:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.972 23:41:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.972 23:41:22 -- common/autotest_common.sh@10 -- # set +x 00:03:49.972 ************************************ 00:03:49.972 START TEST skip_rpc 00:03:49.972 ************************************ 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:49.972 * Looking for test storage... 
00:03:49.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.972 23:41:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.972 --rc genhtml_branch_coverage=1 00:03:49.972 --rc genhtml_function_coverage=1 00:03:49.972 --rc genhtml_legend=1 00:03:49.972 --rc geninfo_all_blocks=1 00:03:49.972 --rc geninfo_unexecuted_blocks=1 00:03:49.972 00:03:49.972 ' 00:03:49.972 23:41:22 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.973 --rc genhtml_branch_coverage=1 00:03:49.973 --rc genhtml_function_coverage=1 00:03:49.973 --rc genhtml_legend=1 00:03:49.973 --rc geninfo_all_blocks=1 00:03:49.973 --rc geninfo_unexecuted_blocks=1 00:03:49.973 00:03:49.973 ' 00:03:49.973 23:41:22 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.973 --rc genhtml_branch_coverage=1 00:03:49.973 --rc genhtml_function_coverage=1 00:03:49.973 --rc genhtml_legend=1 00:03:49.973 --rc geninfo_all_blocks=1 00:03:49.973 --rc geninfo_unexecuted_blocks=1 00:03:49.973 00:03:49.973 ' 00:03:49.973 23:41:22 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.973 --rc genhtml_branch_coverage=1 00:03:49.973 --rc genhtml_function_coverage=1 00:03:49.973 --rc genhtml_legend=1 00:03:49.973 --rc geninfo_all_blocks=1 00:03:49.973 --rc geninfo_unexecuted_blocks=1 00:03:49.973 00:03:49.973 ' 00:03:49.973 23:41:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:49.973 23:41:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:49.973 23:41:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.973 23:41:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.973 23:41:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.973 23:41:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.973 ************************************ 00:03:49.973 START TEST skip_rpc 00:03:49.973 ************************************ 00:03:49.973 23:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:49.973 23:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57366 00:03:49.973 23:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.973 23:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:49.973 23:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.973 [2024-12-05 23:41:22.642663] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:03:49.973 [2024-12-05 23:41:22.642840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57366 ] 00:03:50.276 [2024-12-05 23:41:22.818241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.276 [2024-12-05 23:41:22.906187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.540 23:41:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:55.540 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57366 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57366 ']' 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57366 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57366 00:03:55.541 killing process with pid 57366 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57366' 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57366 00:03:55.541 23:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57366 00:03:56.127 ************************************ 00:03:56.127 END TEST skip_rpc 00:03:56.127 00:03:56.127 real 0m6.252s 00:03:56.127 user 0m5.850s 00:03:56.127 sys 0m0.290s 00:03:56.127 23:41:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.127 23:41:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.127 
************************************ 00:03:56.127 23:41:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.127 23:41:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.127 23:41:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.127 23:41:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.127 ************************************ 00:03:56.127 START TEST skip_rpc_with_json 00:03:56.127 ************************************ 00:03:56.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57459 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57459 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57459 ']' 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.127 23:41:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.385 [2024-12-05 23:41:28.897829] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:03:56.385 [2024-12-05 23:41:28.898431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57459 ] 00:03:56.385 [2024-12-05 23:41:29.054820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.644 [2024-12-05 23:41:29.154713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.211 [2024-12-05 23:41:29.745220] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:57.211 request: 00:03:57.211 { 00:03:57.211 "trtype": "tcp", 00:03:57.211 "method": "nvmf_get_transports", 00:03:57.211 "req_id": 1 00:03:57.211 } 00:03:57.211 Got JSON-RPC error response 00:03:57.211 response: 00:03:57.211 { 00:03:57.211 "code": -19, 00:03:57.211 "message": "No such device" 00:03:57.211 } 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.211 [2024-12-05 23:41:29.753324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.211 23:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:57.212 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.212 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.212 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.212 23:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:57.212 { 00:03:57.212 "subsystems": [ 00:03:57.212 { 00:03:57.212 "subsystem": "fsdev", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "fsdev_set_opts", 00:03:57.212 "params": { 00:03:57.212 "fsdev_io_pool_size": 65535, 00:03:57.212 "fsdev_io_cache_size": 256 00:03:57.212 } 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "keyring", 00:03:57.212 "config": [] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "iobuf", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "iobuf_set_options", 00:03:57.212 "params": { 00:03:57.212 "small_pool_count": 8192, 00:03:57.212 "large_pool_count": 1024, 00:03:57.212 "small_bufsize": 8192, 00:03:57.212 "large_bufsize": 135168, 00:03:57.212 "enable_numa": false 00:03:57.212 } 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "sock", 00:03:57.212 "config": [ 00:03:57.212 { 
00:03:57.212 "method": "sock_set_default_impl", 00:03:57.212 "params": { 00:03:57.212 "impl_name": "posix" 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "sock_impl_set_options", 00:03:57.212 "params": { 00:03:57.212 "impl_name": "ssl", 00:03:57.212 "recv_buf_size": 4096, 00:03:57.212 "send_buf_size": 4096, 00:03:57.212 "enable_recv_pipe": true, 00:03:57.212 "enable_quickack": false, 00:03:57.212 "enable_placement_id": 0, 00:03:57.212 "enable_zerocopy_send_server": true, 00:03:57.212 "enable_zerocopy_send_client": false, 00:03:57.212 "zerocopy_threshold": 0, 00:03:57.212 "tls_version": 0, 00:03:57.212 "enable_ktls": false 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "sock_impl_set_options", 00:03:57.212 "params": { 00:03:57.212 "impl_name": "posix", 00:03:57.212 "recv_buf_size": 2097152, 00:03:57.212 "send_buf_size": 2097152, 00:03:57.212 "enable_recv_pipe": true, 00:03:57.212 "enable_quickack": false, 00:03:57.212 "enable_placement_id": 0, 00:03:57.212 "enable_zerocopy_send_server": true, 00:03:57.212 "enable_zerocopy_send_client": false, 00:03:57.212 "zerocopy_threshold": 0, 00:03:57.212 "tls_version": 0, 00:03:57.212 "enable_ktls": false 00:03:57.212 } 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "vmd", 00:03:57.212 "config": [] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "accel", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "accel_set_options", 00:03:57.212 "params": { 00:03:57.212 "small_cache_size": 128, 00:03:57.212 "large_cache_size": 16, 00:03:57.212 "task_count": 2048, 00:03:57.212 "sequence_count": 2048, 00:03:57.212 "buf_count": 2048 00:03:57.212 } 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "bdev", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "bdev_set_options", 00:03:57.212 "params": { 00:03:57.212 "bdev_io_pool_size": 65535, 00:03:57.212 "bdev_io_cache_size": 256, 00:03:57.212 "bdev_auto_examine": true, 00:03:57.212 "iobuf_small_cache_size": 128, 00:03:57.212 "iobuf_large_cache_size": 16 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "bdev_raid_set_options", 00:03:57.212 "params": { 00:03:57.212 "process_window_size_kb": 1024, 00:03:57.212 "process_max_bandwidth_mb_sec": 0 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "bdev_iscsi_set_options", 00:03:57.212 "params": { 00:03:57.212 "timeout_sec": 30 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "bdev_nvme_set_options", 00:03:57.212 "params": { 00:03:57.212 "action_on_timeout": "none", 00:03:57.212 "timeout_us": 0, 00:03:57.212 "timeout_admin_us": 0, 00:03:57.212 "keep_alive_timeout_ms": 10000, 00:03:57.212 "arbitration_burst": 0, 00:03:57.212 "low_priority_weight": 0, 00:03:57.212 "medium_priority_weight": 0, 00:03:57.212 "high_priority_weight": 0, 00:03:57.212 "nvme_adminq_poll_period_us": 10000, 00:03:57.212 "nvme_ioq_poll_period_us": 0, 00:03:57.212 "io_queue_requests": 0, 00:03:57.212 "delay_cmd_submit": true, 00:03:57.212 "transport_retry_count": 4, 00:03:57.212 "bdev_retry_count": 3, 00:03:57.212 "transport_ack_timeout": 0, 00:03:57.212 "ctrlr_loss_timeout_sec": 0, 00:03:57.212 "reconnect_delay_sec": 0, 00:03:57.212 "fast_io_fail_timeout_sec": 0, 00:03:57.212 "disable_auto_failback": false, 00:03:57.212 "generate_uuids": false, 00:03:57.212 "transport_tos": 0, 00:03:57.212 "nvme_error_stat": false, 00:03:57.212 "rdma_srq_size": 0, 00:03:57.212 "io_path_stat": false, 
00:03:57.212 "allow_accel_sequence": false, 00:03:57.212 "rdma_max_cq_size": 0, 00:03:57.212 "rdma_cm_event_timeout_ms": 0, 00:03:57.212 "dhchap_digests": [ 00:03:57.212 "sha256", 00:03:57.212 "sha384", 00:03:57.212 "sha512" 00:03:57.212 ], 00:03:57.212 "dhchap_dhgroups": [ 00:03:57.212 "null", 00:03:57.212 "ffdhe2048", 00:03:57.212 "ffdhe3072", 00:03:57.212 "ffdhe4096", 00:03:57.212 "ffdhe6144", 00:03:57.212 "ffdhe8192" 00:03:57.212 ] 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "bdev_nvme_set_hotplug", 00:03:57.212 "params": { 00:03:57.212 "period_us": 100000, 00:03:57.212 "enable": false 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "bdev_wait_for_examine" 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "scsi", 00:03:57.212 "config": null 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "scheduler", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "framework_set_scheduler", 00:03:57.212 "params": { 00:03:57.212 "name": "static" 00:03:57.212 } 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "vhost_scsi", 00:03:57.212 "config": [] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "vhost_blk", 00:03:57.212 "config": [] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "ublk", 00:03:57.212 "config": [] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "nbd", 00:03:57.212 "config": [] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "nvmf", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "nvmf_set_config", 00:03:57.212 "params": { 00:03:57.212 "discovery_filter": "match_any", 00:03:57.212 "admin_cmd_passthru": { 00:03:57.212 "identify_ctrlr": false 00:03:57.212 }, 00:03:57.212 "dhchap_digests": [ 00:03:57.212 "sha256", 00:03:57.212 "sha384", 00:03:57.212 "sha512" 00:03:57.212 ], 00:03:57.212 "dhchap_dhgroups": [ 00:03:57.212 "null", 00:03:57.212 "ffdhe2048", 00:03:57.212 "ffdhe3072", 00:03:57.212 "ffdhe4096", 00:03:57.212 "ffdhe6144", 00:03:57.212 "ffdhe8192" 00:03:57.212 ] 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "nvmf_set_max_subsystems", 00:03:57.212 "params": { 00:03:57.212 "max_subsystems": 1024 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "nvmf_set_crdt", 00:03:57.212 "params": { 00:03:57.212 "crdt1": 0, 00:03:57.212 "crdt2": 0, 00:03:57.212 "crdt3": 0 00:03:57.212 } 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "method": "nvmf_create_transport", 00:03:57.212 "params": { 00:03:57.212 "trtype": "TCP", 00:03:57.212 "max_queue_depth": 128, 00:03:57.212 "max_io_qpairs_per_ctrlr": 127, 00:03:57.212 "in_capsule_data_size": 4096, 00:03:57.212 "max_io_size": 131072, 00:03:57.212 "io_unit_size": 131072, 00:03:57.212 "max_aq_depth": 128, 00:03:57.212 "num_shared_buffers": 511, 00:03:57.212 "buf_cache_size": 4294967295, 00:03:57.212 "dif_insert_or_strip": false, 00:03:57.212 "zcopy": false, 00:03:57.212 "c2h_success": true, 00:03:57.212 "sock_priority": 0, 00:03:57.212 "abort_timeout_sec": 1, 00:03:57.212 "ack_timeout": 0, 00:03:57.212 "data_wr_pool_size": 0 00:03:57.212 } 00:03:57.212 } 00:03:57.212 ] 00:03:57.212 }, 00:03:57.212 { 00:03:57.212 "subsystem": "iscsi", 00:03:57.212 "config": [ 00:03:57.212 { 00:03:57.212 "method": "iscsi_set_options", 00:03:57.212 "params": { 00:03:57.212 "node_base": "iqn.2016-06.io.spdk", 00:03:57.212 "max_sessions": 128, 00:03:57.212 "max_connections_per_session": 2, 00:03:57.212 "max_queue_depth": 64, 00:03:57.212 
"default_time2wait": 2, 00:03:57.212 "default_time2retain": 20, 00:03:57.212 "first_burst_length": 8192, 00:03:57.212 "immediate_data": true, 00:03:57.212 "allow_duplicated_isid": false, 00:03:57.213 "error_recovery_level": 0, 00:03:57.213 "nop_timeout": 60, 00:03:57.213 "nop_in_interval": 30, 00:03:57.213 "disable_chap": false, 00:03:57.213 "require_chap": false, 00:03:57.213 "mutual_chap": false, 00:03:57.213 "chap_group": 0, 00:03:57.213 "max_large_datain_per_connection": 64, 00:03:57.213 "max_r2t_per_connection": 4, 00:03:57.213 "pdu_pool_size": 36864, 00:03:57.213 "immediate_data_pool_size": 16384, 00:03:57.213 "data_out_pool_size": 2048 00:03:57.213 } 00:03:57.213 } 00:03:57.213 ] 00:03:57.213 } 00:03:57.213 ] 00:03:57.213 } 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57459 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57459 ']' 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57459 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.213 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57459 00:03:57.471 killing process with pid 57459 00:03:57.471 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.471 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.471 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57459' 00:03:57.471 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57459 00:03:57.471 23:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57459 00:03:58.844 23:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57499 00:03:58.844 23:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:58.844 23:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.182 23:41:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57499 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57499 ']' 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57499 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57499 00:04:04.183 killing process with pid 57499 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57499' 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57499 00:04:04.183 23:41:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57499 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:05.557 ************************************ 00:04:05.557 END TEST skip_rpc_with_json 00:04:05.557 ************************************ 00:04:05.557 00:04:05.557 real 0m9.258s 00:04:05.557 user 0m8.851s 00:04:05.557 sys 0m0.625s 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.557 23:41:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:05.557 23:41:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.557 23:41:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.557 23:41:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.557 ************************************ 00:04:05.557 START TEST skip_rpc_with_delay 00:04:05.557 ************************************ 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:05.557 [2024-12-05 23:41:38.211789] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:05.557 00:04:05.557 real 0m0.137s 00:04:05.557 user 0m0.067s 00:04:05.557 sys 0m0.068s 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.557 23:41:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:05.557 ************************************ 00:04:05.557 END TEST skip_rpc_with_delay 00:04:05.557 ************************************ 00:04:05.816 23:41:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:05.816 23:41:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:05.816 23:41:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:05.816 23:41:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.816 23:41:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.816 23:41:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.816 ************************************ 00:04:05.816 START TEST exit_on_failed_rpc_init 00:04:05.816 ************************************ 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57626 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57626 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57626 ']' 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.816 23:41:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.816 [2024-12-05 23:41:38.390240] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:04:05.816 [2024-12-05 23:41:38.390369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57626 ] 00:04:06.074 [2024-12-05 23:41:38.548434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.074 [2024-12-05 23:41:38.665498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:06.641 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.899 [2024-12-05 23:41:39.404647] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:06.900 [2024-12-05 23:41:39.404776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57644 ] 00:04:06.900 [2024-12-05 23:41:39.563994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.163 [2024-12-05 23:41:39.683419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.163 [2024-12-05 23:41:39.683521] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:07.163 [2024-12-05 23:41:39.683536] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:07.163 [2024-12-05 23:41:39.683552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57626 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57626 ']' 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57626 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57626 00:04:07.421 killing process with pid 57626 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57626' 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57626 00:04:07.421 23:41:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57626 00:04:08.870 ************************************ 00:04:08.870 END TEST exit_on_failed_rpc_init 00:04:08.870 ************************************ 00:04:08.870 00:04:08.870 real 0m3.007s 00:04:08.870 user 0m3.281s 00:04:08.870 sys 0m0.474s 00:04:08.870 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.870 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.870 23:41:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.870 00:04:08.870 real 0m18.973s 00:04:08.870 user 0m18.187s 00:04:08.870 sys 0m1.628s 00:04:08.870 23:41:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.870 ************************************ 00:04:08.870 END TEST skip_rpc 00:04:08.870 ************************************ 00:04:08.870 23:41:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.870 23:41:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:08.870 23:41:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.870 23:41:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.870 23:41:41 -- common/autotest_common.sh@10 -- # set +x 00:04:08.870 
************************************ 00:04:08.870 START TEST rpc_client 00:04:08.870 ************************************ 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:08.870 * Looking for test storage... 00:04:08.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.870 23:41:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.870 --rc genhtml_branch_coverage=1 00:04:08.870 --rc genhtml_function_coverage=1 00:04:08.870 --rc genhtml_legend=1 00:04:08.870 --rc geninfo_all_blocks=1 00:04:08.870 --rc geninfo_unexecuted_blocks=1 00:04:08.870 00:04:08.870 ' 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.870 --rc genhtml_branch_coverage=1 00:04:08.870 --rc genhtml_function_coverage=1 00:04:08.870 --rc genhtml_legend=1 00:04:08.870 --rc geninfo_all_blocks=1 00:04:08.870 --rc geninfo_unexecuted_blocks=1 00:04:08.870 00:04:08.870 ' 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.870 --rc genhtml_branch_coverage=1 00:04:08.870 --rc genhtml_function_coverage=1 00:04:08.870 --rc genhtml_legend=1 00:04:08.870 --rc geninfo_all_blocks=1 00:04:08.870 --rc geninfo_unexecuted_blocks=1 00:04:08.870 00:04:08.870 ' 00:04:08.870 23:41:41 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.870 --rc genhtml_branch_coverage=1 00:04:08.870 --rc genhtml_function_coverage=1 00:04:08.870 --rc genhtml_legend=1 00:04:08.870 --rc geninfo_all_blocks=1 00:04:08.870 --rc geninfo_unexecuted_blocks=1 00:04:08.870 00:04:08.870 ' 00:04:08.870 23:41:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:09.156 OK 00:04:09.156 23:41:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:09.156 00:04:09.156 real 0m0.208s 00:04:09.156 user 0m0.116s 00:04:09.156 sys 0m0.096s 00:04:09.156 23:41:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.156 ************************************ 00:04:09.156 END TEST rpc_client 00:04:09.156 ************************************ 00:04:09.156 23:41:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:09.156 23:41:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:09.156 23:41:41 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.156 23:41:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.156 23:41:41 -- common/autotest_common.sh@10 -- # set +x 00:04:09.156 ************************************ 00:04:09.156 START TEST json_config 00:04:09.156 ************************************ 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.156 23:41:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.156 23:41:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.156 23:41:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.156 23:41:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.156 23:41:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.156 23:41:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:09.156 23:41:41 json_config -- scripts/common.sh@345 -- # : 1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.156 23:41:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.156 23:41:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@353 -- # local d=1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.156 23:41:41 json_config -- scripts/common.sh@355 -- # echo 1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.156 23:41:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@353 -- # local d=2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.156 23:41:41 json_config -- scripts/common.sh@355 -- # echo 2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.156 23:41:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.156 23:41:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.156 23:41:41 json_config -- scripts/common.sh@368 -- # return 0 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.156 --rc genhtml_branch_coverage=1 00:04:09.156 --rc genhtml_function_coverage=1 00:04:09.156 --rc genhtml_legend=1 00:04:09.156 --rc geninfo_all_blocks=1 00:04:09.156 --rc geninfo_unexecuted_blocks=1 00:04:09.156 00:04:09.156 ' 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.156 --rc genhtml_branch_coverage=1 00:04:09.156 --rc genhtml_function_coverage=1 00:04:09.156 --rc genhtml_legend=1 00:04:09.156 --rc geninfo_all_blocks=1 00:04:09.156 --rc geninfo_unexecuted_blocks=1 00:04:09.156 00:04:09.156 ' 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.156 --rc genhtml_branch_coverage=1 00:04:09.156 --rc genhtml_function_coverage=1 00:04:09.156 --rc genhtml_legend=1 00:04:09.156 --rc geninfo_all_blocks=1 00:04:09.156 --rc geninfo_unexecuted_blocks=1 00:04:09.156 00:04:09.156 ' 00:04:09.156 23:41:41 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.156 --rc genhtml_branch_coverage=1 00:04:09.156 --rc genhtml_function_coverage=1 00:04:09.156 --rc genhtml_legend=1 00:04:09.156 --rc geninfo_all_blocks=1 00:04:09.156 --rc geninfo_unexecuted_blocks=1 00:04:09.156 00:04:09.156 ' 00:04:09.156 23:41:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.156 23:41:41 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b8235fc-8e64-42c3-b925-f7853c7e59dc 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b8235fc-8e64-42c3-b925-f7853c7e59dc 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.156 23:41:41 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:09.156 23:41:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:09.156 23:41:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.156 23:41:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.156 23:41:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.156 23:41:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.156 23:41:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.157 23:41:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.157 23:41:41 json_config -- paths/export.sh@5 -- # export PATH 00:04:09.157 23:41:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@51 -- # : 0 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:09.157 23:41:41 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:09.157 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:09.157 23:41:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:09.157 WARNING: No tests are enabled so not running JSON configuration tests 00:04:09.157 23:41:41 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:09.157 ************************************ 00:04:09.157 END TEST json_config 00:04:09.157 ************************************ 00:04:09.157 00:04:09.157 real 0m0.145s 00:04:09.157 user 0m0.090s 00:04:09.157 sys 0m0.054s 00:04:09.157 23:41:41 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.157 23:41:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.417 23:41:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:09.417 23:41:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.417 23:41:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.417 23:41:41 -- common/autotest_common.sh@10 -- # set +x 00:04:09.417 ************************************ 00:04:09.417 START TEST json_config_extra_key 00:04:09.417 ************************************ 00:04:09.417 23:41:41 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:09.417 23:41:41 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.417 23:41:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.417 23:41:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.417 23:41:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.417 23:41:41 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.417 23:41:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.418 23:41:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:09.418 23:41:41 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.418 23:41:41 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.418 --rc genhtml_branch_coverage=1 00:04:09.418 --rc genhtml_function_coverage=1 00:04:09.418 --rc genhtml_legend=1 00:04:09.418 --rc geninfo_all_blocks=1 00:04:09.418 --rc geninfo_unexecuted_blocks=1 00:04:09.418 00:04:09.418 ' 00:04:09.418 23:41:41 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.418 --rc genhtml_branch_coverage=1 00:04:09.418 --rc genhtml_function_coverage=1 00:04:09.418 --rc genhtml_legend=1 00:04:09.418 --rc geninfo_all_blocks=1 00:04:09.418 --rc geninfo_unexecuted_blocks=1 00:04:09.418 00:04:09.418 ' 00:04:09.418 23:41:41 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.418 --rc genhtml_branch_coverage=1 00:04:09.418 --rc genhtml_function_coverage=1 00:04:09.418 --rc genhtml_legend=1 00:04:09.418 --rc geninfo_all_blocks=1 00:04:09.418 --rc geninfo_unexecuted_blocks=1 00:04:09.418 00:04:09.418 ' 00:04:09.418 23:41:41 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.418 --rc genhtml_branch_coverage=1 00:04:09.418 --rc 
genhtml_function_coverage=1 00:04:09.418 --rc genhtml_legend=1 00:04:09.418 --rc geninfo_all_blocks=1 00:04:09.418 --rc geninfo_unexecuted_blocks=1 00:04:09.418 00:04:09.418 ' 00:04:09.418 23:41:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.418 23:41:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b8235fc-8e64-42c3-b925-f7853c7e59dc 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b8235fc-8e64-42c3-b925-f7853c7e59dc 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:09.418 23:41:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:09.418 23:41:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.418 23:41:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.418 23:41:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.418 23:41:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.418 23:41:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.418 23:41:42 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.418 23:41:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:09.418 23:41:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:09.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:09.418 23:41:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:09.418 INFO: launching applications... 
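Each test binary gates its lcov options on the installed lcov version, and the xtrace above walks through that gate: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':', normalizes each component with a small decimal helper, and compares the components pairwise. A condensed sketch of that logic, reconstructed from the trace rather than copied from scripts/common.sh:

    # Returns success when version $1 is strictly older than $2 (e.g. lt 1.15 2).
    decimal() {
        local d=$1
        # Non-numeric or missing components count as 0, matching the traced checks.
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]   # all components compared equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo 'old lcov: fall back to explicit LCOV_OPTS'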
00:04:09.418 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57838 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:09.418 Waiting for target to run... 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57838 /var/tmp/spdk_tgt.sock 00:04:09.418 23:41:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:09.418 23:41:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57838 ']' 00:04:09.418 23:41:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.418 23:41:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.418 23:41:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.418 23:41:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.418 23:41:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:09.418 [2024-12-05 23:41:42.088096] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:09.419 [2024-12-05 23:41:42.088387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57838 ] 00:04:09.985 [2024-12-05 23:41:42.408091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.985 [2024-12-05 23:41:42.501225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.550 23:41:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.550 23:41:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:10.550 00:04:10.550 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:10.550 INFO: shutting down applications... 
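The entries just above launch the target for this test through json_config_test_start_app: the app's PID, socket and parameters are tracked in associative arrays, spdk_tgt is started with -m 0x1 -s 1024, an RPC socket and the extra_key.json config, and waitforlisten blocks until the UNIX socket answers. The entries that follow shut it down again by sending SIGINT and polling the PID with kill -0 in half-second steps, up to 30 tries. A minimal self-contained sketch of that lifecycle (binary and config paths are placeholders):

    SPDK_BIN=./build/bin/spdk_tgt       # placeholder path
    SOCK=/var/tmp/spdk_tgt.sock
    CONFIG=./extra_key.json             # placeholder config

    start_app() {
        "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
        app_pid=$!
        # Stand-in for waitforlisten: poll until the RPC socket exists.
        local i
        for (( i = 0; i < 100; i++ )); do
            [[ -S $SOCK ]] && return 0
            sleep 0.1
        done
        return 1
    }

    shutdown_app() {
        kill -SIGINT "$app_pid"
        local i
        for (( i = 0; i < 30; i++ )); do
            # kill -0 only tests that the process still exists.
            kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        return 1                        # target did not exit within ~15 s
    }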
00:04:10.550 23:41:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57838 ]] 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57838 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57838 00:04:10.550 23:41:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.806 23:41:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.806 23:41:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.806 23:41:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57838 00:04:10.806 23:41:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.372 23:41:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.372 23:41:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.372 23:41:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57838 00:04:11.372 23:41:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.937 23:41:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.937 23:41:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.938 23:41:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57838 00:04:11.938 23:41:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.503 23:41:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.503 23:41:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.504 23:41:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57838 00:04:12.504 23:41:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.504 23:41:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:12.504 23:41:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.504 SPDK target shutdown done 00:04:12.504 23:41:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.504 Success 00:04:12.504 23:41:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:12.504 00:04:12.504 real 0m3.143s 00:04:12.504 user 0m2.692s 00:04:12.504 sys 0m0.401s 00:04:12.504 ************************************ 00:04:12.504 END TEST json_config_extra_key 00:04:12.504 ************************************ 00:04:12.504 23:41:45 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.504 23:41:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.504 23:41:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.504 23:41:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.504 23:41:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.504 23:41:45 -- common/autotest_common.sh@10 -- # set +x 00:04:12.504 
************************************ 00:04:12.504 START TEST alias_rpc 00:04:12.504 ************************************ 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.504 * Looking for test storage... 00:04:12.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.504 23:41:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.504 --rc genhtml_branch_coverage=1 00:04:12.504 --rc genhtml_function_coverage=1 00:04:12.504 --rc genhtml_legend=1 00:04:12.504 --rc geninfo_all_blocks=1 00:04:12.504 --rc geninfo_unexecuted_blocks=1 00:04:12.504 00:04:12.504 ' 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.504 --rc genhtml_branch_coverage=1 00:04:12.504 --rc genhtml_function_coverage=1 00:04:12.504 --rc genhtml_legend=1 00:04:12.504 --rc geninfo_all_blocks=1 00:04:12.504 --rc geninfo_unexecuted_blocks=1 00:04:12.504 00:04:12.504 ' 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.504 --rc genhtml_branch_coverage=1 00:04:12.504 --rc genhtml_function_coverage=1 00:04:12.504 --rc genhtml_legend=1 00:04:12.504 --rc geninfo_all_blocks=1 00:04:12.504 --rc geninfo_unexecuted_blocks=1 00:04:12.504 00:04:12.504 ' 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.504 --rc genhtml_branch_coverage=1 00:04:12.504 --rc genhtml_function_coverage=1 00:04:12.504 --rc genhtml_legend=1 00:04:12.504 --rc geninfo_all_blocks=1 00:04:12.504 --rc geninfo_unexecuted_blocks=1 00:04:12.504 00:04:12.504 ' 00:04:12.504 23:41:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:12.504 23:41:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57931 00:04:12.504 23:41:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57931 00:04:12.504 23:41:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57931 ']' 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:12.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.504 23:41:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.762 [2024-12-05 23:41:45.267258] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:12.762 [2024-12-05 23:41:45.267903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57931 ] 00:04:12.762 [2024-12-05 23:41:45.428580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.021 [2024-12-05 23:41:45.527191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.589 23:41:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.589 23:41:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.590 23:41:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:13.848 23:41:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57931 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57931 ']' 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57931 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57931 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57931' 00:04:13.848 killing process with pid 57931 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 57931 00:04:13.848 23:41:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 57931 00:04:15.295 00:04:15.295 real 0m2.766s 00:04:15.295 user 0m2.866s 00:04:15.295 sys 0m0.387s 00:04:15.295 ************************************ 00:04:15.295 END TEST alias_rpc 00:04:15.295 ************************************ 00:04:15.295 23:41:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.295 23:41:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.295 23:41:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:15.295 23:41:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:15.295 23:41:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.295 23:41:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.295 23:41:47 -- common/autotest_common.sh@10 -- # set +x 00:04:15.295 ************************************ 00:04:15.295 START TEST spdkcli_tcp 00:04:15.295 ************************************ 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:15.295 * Looking for test storage... 
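The alias_rpc teardown above runs killprocess 57931, and the trace exposes the whole helper: check that a PID was passed, confirm the process is still alive with kill -0, resolve its command name with ps --no-headers -o comm= (reactor_0 for an SPDK target), take the plain kill path unless the name is sudo, then wait for the PID. A rough sketch of that flow, simplified from the traced steps:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it already exited
        if [[ $(uname) == Linux ]]; then
            # SPDK targets report their main thread as reactor_0, reactor_1, ...
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The sudo branch is not exercised in this trace and is omitted here.
        if [[ $process_name != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null
    }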
00:04:15.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.295 23:41:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.295 --rc genhtml_branch_coverage=1 00:04:15.295 --rc genhtml_function_coverage=1 00:04:15.295 --rc genhtml_legend=1 00:04:15.295 --rc geninfo_all_blocks=1 00:04:15.295 --rc geninfo_unexecuted_blocks=1 00:04:15.295 00:04:15.295 ' 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.295 --rc genhtml_branch_coverage=1 00:04:15.295 --rc genhtml_function_coverage=1 00:04:15.295 --rc genhtml_legend=1 00:04:15.295 --rc geninfo_all_blocks=1 00:04:15.295 --rc geninfo_unexecuted_blocks=1 00:04:15.295 
00:04:15.295 ' 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.295 --rc genhtml_branch_coverage=1 00:04:15.295 --rc genhtml_function_coverage=1 00:04:15.295 --rc genhtml_legend=1 00:04:15.295 --rc geninfo_all_blocks=1 00:04:15.295 --rc geninfo_unexecuted_blocks=1 00:04:15.295 00:04:15.295 ' 00:04:15.295 23:41:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.295 --rc genhtml_branch_coverage=1 00:04:15.295 --rc genhtml_function_coverage=1 00:04:15.295 --rc genhtml_legend=1 00:04:15.295 --rc geninfo_all_blocks=1 00:04:15.295 --rc geninfo_unexecuted_blocks=1 00:04:15.295 00:04:15.295 ' 00:04:15.295 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:15.295 23:41:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:15.295 23:41:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:15.296 23:41:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.296 23:41:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58022 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58022 00:04:15.296 23:41:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58022 ']' 00:04:15.296 23:41:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.296 23:41:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.296 23:41:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.296 23:41:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.296 23:41:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:15.296 23:41:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.553 [2024-12-05 23:41:48.076657] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:04:15.553 [2024-12-05 23:41:48.077228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58022 ] 00:04:15.553 [2024-12-05 23:41:48.238957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.811 [2024-12-05 23:41:48.341519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.811 [2024-12-05 23:41:48.341666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.376 23:41:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.376 23:41:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:16.376 23:41:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58039 00:04:16.376 23:41:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:16.376 23:41:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:16.634 [ 00:04:16.634 "bdev_malloc_delete", 00:04:16.634 "bdev_malloc_create", 00:04:16.634 "bdev_null_resize", 00:04:16.634 "bdev_null_delete", 00:04:16.634 "bdev_null_create", 00:04:16.634 "bdev_nvme_cuse_unregister", 00:04:16.634 "bdev_nvme_cuse_register", 00:04:16.634 "bdev_opal_new_user", 00:04:16.634 "bdev_opal_set_lock_state", 00:04:16.634 "bdev_opal_delete", 00:04:16.634 "bdev_opal_get_info", 00:04:16.634 "bdev_opal_create", 00:04:16.634 "bdev_nvme_opal_revert", 00:04:16.634 "bdev_nvme_opal_init", 00:04:16.634 "bdev_nvme_send_cmd", 00:04:16.634 "bdev_nvme_set_keys", 00:04:16.634 "bdev_nvme_get_path_iostat", 00:04:16.634 "bdev_nvme_get_mdns_discovery_info", 00:04:16.634 "bdev_nvme_stop_mdns_discovery", 00:04:16.634 "bdev_nvme_start_mdns_discovery", 00:04:16.634 "bdev_nvme_set_multipath_policy", 00:04:16.634 "bdev_nvme_set_preferred_path", 00:04:16.634 "bdev_nvme_get_io_paths", 00:04:16.634 "bdev_nvme_remove_error_injection", 00:04:16.634 "bdev_nvme_add_error_injection", 00:04:16.634 "bdev_nvme_get_discovery_info", 00:04:16.634 "bdev_nvme_stop_discovery", 00:04:16.634 "bdev_nvme_start_discovery", 00:04:16.634 "bdev_nvme_get_controller_health_info", 00:04:16.634 "bdev_nvme_disable_controller", 00:04:16.634 "bdev_nvme_enable_controller", 00:04:16.634 "bdev_nvme_reset_controller", 00:04:16.634 "bdev_nvme_get_transport_statistics", 00:04:16.634 "bdev_nvme_apply_firmware", 00:04:16.634 "bdev_nvme_detach_controller", 00:04:16.634 "bdev_nvme_get_controllers", 00:04:16.634 "bdev_nvme_attach_controller", 00:04:16.634 "bdev_nvme_set_hotplug", 00:04:16.634 "bdev_nvme_set_options", 00:04:16.634 "bdev_passthru_delete", 00:04:16.634 "bdev_passthru_create", 00:04:16.635 "bdev_lvol_set_parent_bdev", 00:04:16.635 "bdev_lvol_set_parent", 00:04:16.635 "bdev_lvol_check_shallow_copy", 00:04:16.635 "bdev_lvol_start_shallow_copy", 00:04:16.635 "bdev_lvol_grow_lvstore", 00:04:16.635 "bdev_lvol_get_lvols", 00:04:16.635 "bdev_lvol_get_lvstores", 00:04:16.635 "bdev_lvol_delete", 00:04:16.635 "bdev_lvol_set_read_only", 00:04:16.635 "bdev_lvol_resize", 00:04:16.635 "bdev_lvol_decouple_parent", 00:04:16.635 "bdev_lvol_inflate", 00:04:16.635 "bdev_lvol_rename", 00:04:16.635 "bdev_lvol_clone_bdev", 00:04:16.635 "bdev_lvol_clone", 00:04:16.635 "bdev_lvol_snapshot", 00:04:16.635 "bdev_lvol_create", 00:04:16.635 "bdev_lvol_delete_lvstore", 00:04:16.635 "bdev_lvol_rename_lvstore", 00:04:16.635 
"bdev_lvol_create_lvstore", 00:04:16.635 "bdev_raid_set_options", 00:04:16.635 "bdev_raid_remove_base_bdev", 00:04:16.635 "bdev_raid_add_base_bdev", 00:04:16.635 "bdev_raid_delete", 00:04:16.635 "bdev_raid_create", 00:04:16.635 "bdev_raid_get_bdevs", 00:04:16.635 "bdev_error_inject_error", 00:04:16.635 "bdev_error_delete", 00:04:16.635 "bdev_error_create", 00:04:16.635 "bdev_split_delete", 00:04:16.635 "bdev_split_create", 00:04:16.635 "bdev_delay_delete", 00:04:16.635 "bdev_delay_create", 00:04:16.635 "bdev_delay_update_latency", 00:04:16.635 "bdev_zone_block_delete", 00:04:16.635 "bdev_zone_block_create", 00:04:16.635 "blobfs_create", 00:04:16.635 "blobfs_detect", 00:04:16.635 "blobfs_set_cache_size", 00:04:16.635 "bdev_xnvme_delete", 00:04:16.635 "bdev_xnvme_create", 00:04:16.635 "bdev_aio_delete", 00:04:16.635 "bdev_aio_rescan", 00:04:16.635 "bdev_aio_create", 00:04:16.635 "bdev_ftl_set_property", 00:04:16.635 "bdev_ftl_get_properties", 00:04:16.635 "bdev_ftl_get_stats", 00:04:16.635 "bdev_ftl_unmap", 00:04:16.635 "bdev_ftl_unload", 00:04:16.635 "bdev_ftl_delete", 00:04:16.635 "bdev_ftl_load", 00:04:16.635 "bdev_ftl_create", 00:04:16.635 "bdev_virtio_attach_controller", 00:04:16.635 "bdev_virtio_scsi_get_devices", 00:04:16.635 "bdev_virtio_detach_controller", 00:04:16.635 "bdev_virtio_blk_set_hotplug", 00:04:16.635 "bdev_iscsi_delete", 00:04:16.635 "bdev_iscsi_create", 00:04:16.635 "bdev_iscsi_set_options", 00:04:16.635 "accel_error_inject_error", 00:04:16.635 "ioat_scan_accel_module", 00:04:16.635 "dsa_scan_accel_module", 00:04:16.635 "iaa_scan_accel_module", 00:04:16.635 "keyring_file_remove_key", 00:04:16.635 "keyring_file_add_key", 00:04:16.635 "keyring_linux_set_options", 00:04:16.635 "fsdev_aio_delete", 00:04:16.635 "fsdev_aio_create", 00:04:16.635 "iscsi_get_histogram", 00:04:16.635 "iscsi_enable_histogram", 00:04:16.635 "iscsi_set_options", 00:04:16.635 "iscsi_get_auth_groups", 00:04:16.635 "iscsi_auth_group_remove_secret", 00:04:16.635 "iscsi_auth_group_add_secret", 00:04:16.635 "iscsi_delete_auth_group", 00:04:16.635 "iscsi_create_auth_group", 00:04:16.635 "iscsi_set_discovery_auth", 00:04:16.635 "iscsi_get_options", 00:04:16.635 "iscsi_target_node_request_logout", 00:04:16.635 "iscsi_target_node_set_redirect", 00:04:16.635 "iscsi_target_node_set_auth", 00:04:16.635 "iscsi_target_node_add_lun", 00:04:16.635 "iscsi_get_stats", 00:04:16.635 "iscsi_get_connections", 00:04:16.635 "iscsi_portal_group_set_auth", 00:04:16.635 "iscsi_start_portal_group", 00:04:16.635 "iscsi_delete_portal_group", 00:04:16.635 "iscsi_create_portal_group", 00:04:16.635 "iscsi_get_portal_groups", 00:04:16.635 "iscsi_delete_target_node", 00:04:16.635 "iscsi_target_node_remove_pg_ig_maps", 00:04:16.635 "iscsi_target_node_add_pg_ig_maps", 00:04:16.635 "iscsi_create_target_node", 00:04:16.635 "iscsi_get_target_nodes", 00:04:16.635 "iscsi_delete_initiator_group", 00:04:16.635 "iscsi_initiator_group_remove_initiators", 00:04:16.635 "iscsi_initiator_group_add_initiators", 00:04:16.635 "iscsi_create_initiator_group", 00:04:16.635 "iscsi_get_initiator_groups", 00:04:16.635 "nvmf_set_crdt", 00:04:16.635 "nvmf_set_config", 00:04:16.635 "nvmf_set_max_subsystems", 00:04:16.635 "nvmf_stop_mdns_prr", 00:04:16.635 "nvmf_publish_mdns_prr", 00:04:16.635 "nvmf_subsystem_get_listeners", 00:04:16.635 "nvmf_subsystem_get_qpairs", 00:04:16.635 "nvmf_subsystem_get_controllers", 00:04:16.635 "nvmf_get_stats", 00:04:16.635 "nvmf_get_transports", 00:04:16.635 "nvmf_create_transport", 00:04:16.635 "nvmf_get_targets", 00:04:16.635 
"nvmf_delete_target", 00:04:16.635 "nvmf_create_target", 00:04:16.635 "nvmf_subsystem_allow_any_host", 00:04:16.635 "nvmf_subsystem_set_keys", 00:04:16.635 "nvmf_subsystem_remove_host", 00:04:16.635 "nvmf_subsystem_add_host", 00:04:16.635 "nvmf_ns_remove_host", 00:04:16.635 "nvmf_ns_add_host", 00:04:16.635 "nvmf_subsystem_remove_ns", 00:04:16.635 "nvmf_subsystem_set_ns_ana_group", 00:04:16.635 "nvmf_subsystem_add_ns", 00:04:16.635 "nvmf_subsystem_listener_set_ana_state", 00:04:16.635 "nvmf_discovery_get_referrals", 00:04:16.635 "nvmf_discovery_remove_referral", 00:04:16.635 "nvmf_discovery_add_referral", 00:04:16.635 "nvmf_subsystem_remove_listener", 00:04:16.635 "nvmf_subsystem_add_listener", 00:04:16.635 "nvmf_delete_subsystem", 00:04:16.635 "nvmf_create_subsystem", 00:04:16.635 "nvmf_get_subsystems", 00:04:16.635 "env_dpdk_get_mem_stats", 00:04:16.635 "nbd_get_disks", 00:04:16.635 "nbd_stop_disk", 00:04:16.635 "nbd_start_disk", 00:04:16.635 "ublk_recover_disk", 00:04:16.635 "ublk_get_disks", 00:04:16.635 "ublk_stop_disk", 00:04:16.635 "ublk_start_disk", 00:04:16.635 "ublk_destroy_target", 00:04:16.635 "ublk_create_target", 00:04:16.635 "virtio_blk_create_transport", 00:04:16.635 "virtio_blk_get_transports", 00:04:16.635 "vhost_controller_set_coalescing", 00:04:16.635 "vhost_get_controllers", 00:04:16.635 "vhost_delete_controller", 00:04:16.635 "vhost_create_blk_controller", 00:04:16.635 "vhost_scsi_controller_remove_target", 00:04:16.635 "vhost_scsi_controller_add_target", 00:04:16.635 "vhost_start_scsi_controller", 00:04:16.635 "vhost_create_scsi_controller", 00:04:16.635 "thread_set_cpumask", 00:04:16.635 "scheduler_set_options", 00:04:16.635 "framework_get_governor", 00:04:16.635 "framework_get_scheduler", 00:04:16.635 "framework_set_scheduler", 00:04:16.635 "framework_get_reactors", 00:04:16.635 "thread_get_io_channels", 00:04:16.635 "thread_get_pollers", 00:04:16.635 "thread_get_stats", 00:04:16.635 "framework_monitor_context_switch", 00:04:16.635 "spdk_kill_instance", 00:04:16.635 "log_enable_timestamps", 00:04:16.635 "log_get_flags", 00:04:16.635 "log_clear_flag", 00:04:16.635 "log_set_flag", 00:04:16.635 "log_get_level", 00:04:16.635 "log_set_level", 00:04:16.635 "log_get_print_level", 00:04:16.635 "log_set_print_level", 00:04:16.635 "framework_enable_cpumask_locks", 00:04:16.635 "framework_disable_cpumask_locks", 00:04:16.635 "framework_wait_init", 00:04:16.635 "framework_start_init", 00:04:16.635 "scsi_get_devices", 00:04:16.635 "bdev_get_histogram", 00:04:16.635 "bdev_enable_histogram", 00:04:16.635 "bdev_set_qos_limit", 00:04:16.635 "bdev_set_qd_sampling_period", 00:04:16.635 "bdev_get_bdevs", 00:04:16.635 "bdev_reset_iostat", 00:04:16.635 "bdev_get_iostat", 00:04:16.635 "bdev_examine", 00:04:16.635 "bdev_wait_for_examine", 00:04:16.635 "bdev_set_options", 00:04:16.635 "accel_get_stats", 00:04:16.635 "accel_set_options", 00:04:16.635 "accel_set_driver", 00:04:16.635 "accel_crypto_key_destroy", 00:04:16.635 "accel_crypto_keys_get", 00:04:16.635 "accel_crypto_key_create", 00:04:16.635 "accel_assign_opc", 00:04:16.635 "accel_get_module_info", 00:04:16.635 "accel_get_opc_assignments", 00:04:16.635 "vmd_rescan", 00:04:16.635 "vmd_remove_device", 00:04:16.635 "vmd_enable", 00:04:16.635 "sock_get_default_impl", 00:04:16.635 "sock_set_default_impl", 00:04:16.635 "sock_impl_set_options", 00:04:16.635 "sock_impl_get_options", 00:04:16.635 "iobuf_get_stats", 00:04:16.635 "iobuf_set_options", 00:04:16.635 "keyring_get_keys", 00:04:16.635 "framework_get_pci_devices", 00:04:16.635 
"framework_get_config", 00:04:16.635 "framework_get_subsystems", 00:04:16.635 "fsdev_set_opts", 00:04:16.635 "fsdev_get_opts", 00:04:16.635 "trace_get_info", 00:04:16.635 "trace_get_tpoint_group_mask", 00:04:16.635 "trace_disable_tpoint_group", 00:04:16.635 "trace_enable_tpoint_group", 00:04:16.635 "trace_clear_tpoint_mask", 00:04:16.635 "trace_set_tpoint_mask", 00:04:16.635 "notify_get_notifications", 00:04:16.635 "notify_get_types", 00:04:16.635 "spdk_get_version", 00:04:16.635 "rpc_get_methods" 00:04:16.635 ] 00:04:16.635 23:41:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.635 23:41:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:16.635 23:41:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58022 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58022 ']' 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58022 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58022 00:04:16.635 killing process with pid 58022 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.635 23:41:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.636 23:41:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58022' 00:04:16.636 23:41:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58022 00:04:16.636 23:41:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58022 00:04:18.007 ************************************ 00:04:18.007 END TEST spdkcli_tcp 00:04:18.007 ************************************ 00:04:18.007 00:04:18.007 real 0m2.833s 00:04:18.007 user 0m5.110s 00:04:18.007 sys 0m0.449s 00:04:18.007 23:41:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.007 23:41:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.265 23:41:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.265 23:41:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.265 23:41:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.265 23:41:50 -- common/autotest_common.sh@10 -- # set +x 00:04:18.265 ************************************ 00:04:18.265 START TEST dpdk_mem_utility 00:04:18.265 ************************************ 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.265 * Looking for test storage... 
00:04:18.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.265 23:41:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.265 --rc genhtml_branch_coverage=1 00:04:18.265 --rc genhtml_function_coverage=1 00:04:18.265 --rc genhtml_legend=1 00:04:18.265 --rc geninfo_all_blocks=1 00:04:18.265 --rc geninfo_unexecuted_blocks=1 00:04:18.265 00:04:18.265 ' 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.265 --rc 
genhtml_branch_coverage=1 00:04:18.265 --rc genhtml_function_coverage=1 00:04:18.265 --rc genhtml_legend=1 00:04:18.265 --rc geninfo_all_blocks=1 00:04:18.265 --rc geninfo_unexecuted_blocks=1 00:04:18.265 00:04:18.265 ' 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:18.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.265 --rc genhtml_branch_coverage=1 00:04:18.265 --rc genhtml_function_coverage=1 00:04:18.265 --rc genhtml_legend=1 00:04:18.265 --rc geninfo_all_blocks=1 00:04:18.265 --rc geninfo_unexecuted_blocks=1 00:04:18.265 00:04:18.265 ' 00:04:18.265 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.265 --rc genhtml_branch_coverage=1 00:04:18.265 --rc genhtml_function_coverage=1 00:04:18.265 --rc genhtml_legend=1 00:04:18.266 --rc geninfo_all_blocks=1 00:04:18.266 --rc geninfo_unexecuted_blocks=1 00:04:18.266 00:04:18.266 ' 00:04:18.266 23:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:18.266 23:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58133 00:04:18.266 23:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58133 00:04:18.266 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58133 ']' 00:04:18.266 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.266 23:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.266 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.266 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.266 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.266 23:41:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.266 [2024-12-05 23:41:50.943427] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
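The dpdk_mem_utility test starting here boils down to two commands, traced in the entries that follow: rpc_cmd env_dpdk_get_mem_stats asks the running target to dump its DPDK memory state (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders that dump as the heap, mempool and memzone summary; the -m 0 form shown later drills into heap id 0. A condensed sketch of the same sequence, assuming a target is already listening on the default RPC socket:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt and returns its path
    scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones
    scripts/dpdk_mem_info.py -m 0            # per-element detail (heap id 0 in the traced run)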
00:04:18.266 [2024-12-05 23:41:50.943681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58133 ] 00:04:18.523 [2024-12-05 23:41:51.096004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.523 [2024-12-05 23:41:51.193766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.087 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.088 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:19.088 23:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:19.088 23:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:19.088 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.088 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.088 { 00:04:19.088 "filename": "/tmp/spdk_mem_dump.txt" 00:04:19.088 } 00:04:19.088 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.088 23:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:19.346 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:19.346 1 heaps totaling size 824.000000 MiB 00:04:19.346 size: 824.000000 MiB heap id: 0 00:04:19.346 end heaps---------- 00:04:19.346 9 mempools totaling size 603.782043 MiB 00:04:19.346 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:19.346 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:19.346 size: 100.555481 MiB name: bdev_io_58133 00:04:19.346 size: 50.003479 MiB name: msgpool_58133 00:04:19.346 size: 36.509338 MiB name: fsdev_io_58133 00:04:19.346 size: 21.763794 MiB name: PDU_Pool 00:04:19.346 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:19.346 size: 4.133484 MiB name: evtpool_58133 00:04:19.346 size: 0.026123 MiB name: Session_Pool 00:04:19.346 end mempools------- 00:04:19.346 6 memzones totaling size 4.142822 MiB 00:04:19.346 size: 1.000366 MiB name: RG_ring_0_58133 00:04:19.346 size: 1.000366 MiB name: RG_ring_1_58133 00:04:19.346 size: 1.000366 MiB name: RG_ring_4_58133 00:04:19.346 size: 1.000366 MiB name: RG_ring_5_58133 00:04:19.346 size: 0.125366 MiB name: RG_ring_2_58133 00:04:19.346 size: 0.015991 MiB name: RG_ring_3_58133 00:04:19.346 end memzones------- 00:04:19.346 23:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:19.346 heap id: 0 total size: 824.000000 MiB number of busy elements: 325 number of free elements: 18 00:04:19.346 list of free elements. 
size: 16.778931 MiB 00:04:19.346 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:19.346 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:19.346 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:19.346 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:19.346 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:19.347 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:19.347 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:19.347 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:19.347 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:19.347 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:19.347 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:19.347 element at address: 0x20001b400000 with size: 0.559509 MiB 00:04:19.347 element at address: 0x200000c00000 with size: 0.489685 MiB 00:04:19.347 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:19.347 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:19.347 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:19.347 element at address: 0x200028800000 with size: 0.390930 MiB 00:04:19.347 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:19.347 list of standard malloc elements. size: 199.290161 MiB 00:04:19.347 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:19.347 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:19.347 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:19.347 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:19.347 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:19.347 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:19.347 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:19.347 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:19.347 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:19.347 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:19.347 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:19.347 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:19.347 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:19.347 element at 
address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200012cefbc0 
with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:19.347 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4911c0 with size: 0.000244 MiB 
00:04:19.348 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:19.348 element at 
address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:19.348 element at address: 0x200028864140 with size: 0.000244 MiB 00:04:19.348 element at address: 0x200028864240 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886af00 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ce80 
with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:19.348 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:19.349 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:19.349 list of memzone associated elements. 
size: 607.930908 MiB 00:04:19.349 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:19.349 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:19.349 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:19.349 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:19.349 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:19.349 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58133_0 00:04:19.349 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:19.349 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58133_0 00:04:19.349 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:19.349 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58133_0 00:04:19.349 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:19.349 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:19.349 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:19.349 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:19.349 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:19.349 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58133_0 00:04:19.349 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:19.349 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58133 00:04:19.349 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:19.349 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58133 00:04:19.349 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:19.349 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:19.349 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:19.349 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:19.349 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:19.349 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:19.349 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:19.349 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:19.349 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:19.349 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58133 00:04:19.349 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:19.349 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58133 00:04:19.349 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:19.349 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58133 00:04:19.349 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:19.349 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58133 00:04:19.349 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:19.349 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58133 00:04:19.349 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:19.349 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58133 00:04:19.349 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:19.349 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:19.349 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:19.349 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:19.349 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:19.349 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:19.349 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:19.349 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58133 00:04:19.349 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:19.349 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58133 00:04:19.349 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:19.349 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:19.349 element at address: 0x200028864340 with size: 0.023804 MiB 00:04:19.349 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:19.349 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:19.349 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58133 00:04:19.349 element at address: 0x20002886a4c0 with size: 0.002502 MiB 00:04:19.349 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:19.349 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:19.349 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58133 00:04:19.349 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:19.349 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58133 00:04:19.349 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:19.349 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58133 00:04:19.349 element at address: 0x20002886b000 with size: 0.000366 MiB 00:04:19.349 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:19.349 23:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:19.349 23:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58133 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58133 ']' 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58133 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58133 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58133' 00:04:19.349 killing process with pid 58133 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58133 00:04:19.349 23:41:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58133 00:04:21.248 00:04:21.248 real 0m2.714s 00:04:21.248 user 0m2.737s 00:04:21.248 sys 0m0.385s 00:04:21.248 ************************************ 00:04:21.248 END TEST dpdk_mem_utility 00:04:21.248 ************************************ 00:04:21.248 23:41:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.248 23:41:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:21.248 23:41:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:21.248 23:41:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.248 23:41:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.248 23:41:53 -- common/autotest_common.sh@10 -- # set +x 
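The dump above is the output of the dpdk_mem_utility test: the running SPDK target is asked to write its DPDK memory statistics to a file via the env_dpdk_get_mem_stats RPC (the JSON reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders the heap, mempool and memzone summary, with -m 0 adding the per-element listing for heap 0. A minimal sketch of reproducing the same dump by hand against an already running target, using only the commands and paths traced in this run:

  # ask the target to dump its DPDK memory stats (reply names the dump file)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # add the element-by-element listing for heap 0, as shown above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0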
00:04:21.248 ************************************ 00:04:21.248 START TEST event 00:04:21.248 ************************************ 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:21.248 * Looking for test storage... 00:04:21.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.248 23:41:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.248 23:41:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.248 23:41:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.248 23:41:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.248 23:41:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.248 23:41:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.248 23:41:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.248 23:41:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.248 23:41:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.248 23:41:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.248 23:41:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.248 23:41:53 event -- scripts/common.sh@344 -- # case "$op" in 00:04:21.248 23:41:53 event -- scripts/common.sh@345 -- # : 1 00:04:21.248 23:41:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.248 23:41:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.248 23:41:53 event -- scripts/common.sh@365 -- # decimal 1 00:04:21.248 23:41:53 event -- scripts/common.sh@353 -- # local d=1 00:04:21.248 23:41:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.248 23:41:53 event -- scripts/common.sh@355 -- # echo 1 00:04:21.248 23:41:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.248 23:41:53 event -- scripts/common.sh@366 -- # decimal 2 00:04:21.248 23:41:53 event -- scripts/common.sh@353 -- # local d=2 00:04:21.248 23:41:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.248 23:41:53 event -- scripts/common.sh@355 -- # echo 2 00:04:21.248 23:41:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.248 23:41:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.248 23:41:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.248 23:41:53 event -- scripts/common.sh@368 -- # return 0 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.248 --rc genhtml_branch_coverage=1 00:04:21.248 --rc genhtml_function_coverage=1 00:04:21.248 --rc genhtml_legend=1 00:04:21.248 --rc geninfo_all_blocks=1 00:04:21.248 --rc geninfo_unexecuted_blocks=1 00:04:21.248 00:04:21.248 ' 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.248 --rc genhtml_branch_coverage=1 00:04:21.248 --rc genhtml_function_coverage=1 00:04:21.248 --rc genhtml_legend=1 00:04:21.248 --rc 
geninfo_all_blocks=1 00:04:21.248 --rc geninfo_unexecuted_blocks=1 00:04:21.248 00:04:21.248 ' 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.248 --rc genhtml_branch_coverage=1 00:04:21.248 --rc genhtml_function_coverage=1 00:04:21.248 --rc genhtml_legend=1 00:04:21.248 --rc geninfo_all_blocks=1 00:04:21.248 --rc geninfo_unexecuted_blocks=1 00:04:21.248 00:04:21.248 ' 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.248 --rc genhtml_branch_coverage=1 00:04:21.248 --rc genhtml_function_coverage=1 00:04:21.248 --rc genhtml_legend=1 00:04:21.248 --rc geninfo_all_blocks=1 00:04:21.248 --rc geninfo_unexecuted_blocks=1 00:04:21.248 00:04:21.248 ' 00:04:21.248 23:41:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:21.248 23:41:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:21.248 23:41:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:21.248 23:41:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.248 23:41:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.248 ************************************ 00:04:21.248 START TEST event_perf 00:04:21.248 ************************************ 00:04:21.248 23:41:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:21.248 Running I/O for 1 seconds...[2024-12-05 23:41:53.676334] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:21.248 [2024-12-05 23:41:53.676523] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58230 ] 00:04:21.248 [2024-12-05 23:41:53.835818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.248 [2024-12-05 23:41:53.937880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.248 [2024-12-05 23:41:53.938245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:21.248 [2024-12-05 23:41:53.938171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.248 [2024-12-05 23:41:53.938235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.619 Running I/O for 1 seconds... 00:04:22.619 lcore 0: 202255 00:04:22.619 lcore 1: 202254 00:04:22.619 lcore 2: 202253 00:04:22.619 lcore 3: 202254 00:04:22.619 done. 
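The event_perf run above spins up one reactor per core in mask 0xF, runs for the one second given by -t, and prints the number of events processed per lcore. A rough sketch of invoking the same binary directly, with the arguments used in this run (assumes the test binaries have been built in the SPDK tree):

  # 4 reactors (cores 0-3), 1 second run, per-lcore event counts printed at the end
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1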
00:04:22.619 00:04:22.619 real 0m1.459s 00:04:22.619 user 0m4.257s 00:04:22.619 sys 0m0.081s 00:04:22.619 23:41:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.619 ************************************ 00:04:22.619 END TEST event_perf 00:04:22.619 ************************************ 00:04:22.619 23:41:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.619 23:41:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:22.619 23:41:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:22.619 23:41:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.619 23:41:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.619 ************************************ 00:04:22.619 START TEST event_reactor 00:04:22.619 ************************************ 00:04:22.619 23:41:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:22.619 [2024-12-05 23:41:55.173320] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:22.619 [2024-12-05 23:41:55.173612] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58264 ] 00:04:22.876 [2024-12-05 23:41:55.341581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.876 [2024-12-05 23:41:55.439165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.246 test_start 00:04:24.246 oneshot 00:04:24.246 tick 100 00:04:24.246 tick 100 00:04:24.246 tick 250 00:04:24.246 tick 100 00:04:24.246 tick 100 00:04:24.246 tick 250 00:04:24.246 tick 100 00:04:24.246 tick 500 00:04:24.246 tick 100 00:04:24.246 tick 100 00:04:24.246 tick 250 00:04:24.246 tick 100 00:04:24.246 tick 100 00:04:24.246 test_end 00:04:24.246 00:04:24.246 real 0m1.422s 00:04:24.246 user 0m1.241s 00:04:24.246 sys 0m0.071s 00:04:24.246 23:41:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.246 23:41:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:24.246 ************************************ 00:04:24.246 END TEST event_reactor 00:04:24.246 ************************************ 00:04:24.246 23:41:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.246 23:41:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:24.246 23:41:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.246 23:41:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.246 ************************************ 00:04:24.246 START TEST event_reactor_perf 00:04:24.246 ************************************ 00:04:24.246 23:41:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.246 [2024-12-05 23:41:56.637905] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
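The reactor test above and the reactor_perf test starting here take the same -t duration flag and run on a single core; a sketch of the direct invocations, matching the command lines traced in this log:

  # single-reactor scheduling test (prints the tick trace shown above)
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  # single-reactor event throughput test (prints events per second)
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1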
00:04:24.246 [2024-12-05 23:41:56.638135] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58300 ] 00:04:24.246 [2024-12-05 23:41:56.793662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.246 [2024-12-05 23:41:56.874527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.627 test_start 00:04:25.627 test_end 00:04:25.627 Performance: 397515 events per second 00:04:25.627 00:04:25.627 real 0m1.391s 00:04:25.627 user 0m1.212s 00:04:25.627 sys 0m0.072s 00:04:25.627 23:41:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.627 ************************************ 00:04:25.627 END TEST event_reactor_perf 00:04:25.627 ************************************ 00:04:25.627 23:41:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:25.627 23:41:58 event -- event/event.sh@49 -- # uname -s 00:04:25.627 23:41:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:25.627 23:41:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:25.627 23:41:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.627 23:41:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.627 23:41:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.627 ************************************ 00:04:25.627 START TEST event_scheduler 00:04:25.627 ************************************ 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:25.627 * Looking for test storage... 
00:04:25.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.627 23:41:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.627 --rc genhtml_branch_coverage=1 00:04:25.627 --rc genhtml_function_coverage=1 00:04:25.627 --rc genhtml_legend=1 00:04:25.627 --rc geninfo_all_blocks=1 00:04:25.627 --rc geninfo_unexecuted_blocks=1 00:04:25.627 00:04:25.627 ' 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.627 --rc genhtml_branch_coverage=1 00:04:25.627 --rc genhtml_function_coverage=1 00:04:25.627 --rc genhtml_legend=1 00:04:25.627 --rc geninfo_all_blocks=1 00:04:25.627 --rc geninfo_unexecuted_blocks=1 00:04:25.627 00:04:25.627 ' 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.627 --rc genhtml_branch_coverage=1 00:04:25.627 --rc genhtml_function_coverage=1 00:04:25.627 --rc genhtml_legend=1 00:04:25.627 --rc geninfo_all_blocks=1 00:04:25.627 --rc geninfo_unexecuted_blocks=1 00:04:25.627 00:04:25.627 ' 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:25.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.627 --rc genhtml_branch_coverage=1 00:04:25.627 --rc genhtml_function_coverage=1 00:04:25.627 --rc genhtml_legend=1 00:04:25.627 --rc geninfo_all_blocks=1 00:04:25.627 --rc geninfo_unexecuted_blocks=1 00:04:25.627 00:04:25.627 ' 00:04:25.627 23:41:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:25.627 23:41:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58371 00:04:25.627 23:41:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.627 23:41:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58371 00:04:25.627 23:41:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:25.627 23:41:58 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58371 ']' 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.627 23:41:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.627 [2024-12-05 23:41:58.240163] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:25.627 [2024-12-05 23:41:58.240283] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58371 ] 00:04:25.887 [2024-12-05 23:41:58.396837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.887 [2024-12-05 23:41:58.509416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.887 [2024-12-05 23:41:58.509785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.887 [2024-12-05 23:41:58.509924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.887 [2024-12-05 23:41:58.509934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:26.458 23:41:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:26.458 POWER: Cannot set governor of lcore 0 to userspace 00:04:26.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:26.458 POWER: Cannot set governor of lcore 0 to performance 00:04:26.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:26.458 POWER: Cannot set governor of lcore 0 to userspace 00:04:26.458 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:26.458 POWER: Cannot set governor of lcore 0 to userspace 00:04:26.458 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:26.458 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:26.458 POWER: Unable to set Power Management Environment for lcore 0 00:04:26.458 [2024-12-05 23:41:59.083495] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:26.458 [2024-12-05 23:41:59.083515] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:26.458 [2024-12-05 23:41:59.083524] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:26.458 [2024-12-05 23:41:59.083539] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:26.458 [2024-12-05 23:41:59.083547] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:26.458 [2024-12-05 23:41:59.083556] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.458 23:41:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.458 23:41:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 [2024-12-05 23:41:59.309726] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:26.716 23:41:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:26.716 23:41:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.716 23:41:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 ************************************ 00:04:26.716 START TEST scheduler_create_thread 00:04:26.716 ************************************ 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 2 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 3 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 4 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 5 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 6 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 7 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 8 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 9 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 10 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.716 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.975 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.975 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:26.975 23:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:26.975 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.975 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.235 ************************************ 00:04:27.235 END TEST scheduler_create_thread 00:04:27.235 ************************************ 00:04:27.235 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.235 00:04:27.235 real 0m0.591s 00:04:27.235 user 0m0.012s 00:04:27.235 sys 0m0.006s 00:04:27.235 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.235 23:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.496 23:41:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:27.496 23:41:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58371 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58371 ']' 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58371 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58371 00:04:27.496 killing process with pid 58371 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58371' 00:04:27.496 23:41:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58371 00:04:27.496 23:41:59 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58371 00:04:27.757 [2024-12-05 23:42:00.391052] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:28.695 00:04:28.695 real 0m3.082s 00:04:28.695 user 0m5.886s 00:04:28.695 sys 0m0.360s 00:04:28.695 ************************************ 00:04:28.695 END TEST event_scheduler 00:04:28.695 ************************************ 00:04:28.695 23:42:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.695 23:42:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.695 23:42:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:28.695 23:42:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:28.695 23:42:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.695 23:42:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.695 23:42:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.695 ************************************ 00:04:28.695 START TEST app_repeat 00:04:28.695 ************************************ 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:28.695 Process app_repeat pid: 58455 00:04:28.695 spdk_app_start Round 0 00:04:28.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58455 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58455' 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:28.695 23:42:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.695 23:42:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.695 [2024-12-05 23:42:01.201831] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
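For reference, the scheduler_create_thread steps traced above reduce to a short RPC sequence; the sketch below is a hand-condensed restatement of the calls visible in the trace (rpc_cmd is the autotest helper wrapping scripts/rpc.py), not a copy of scheduler/scheduler.sh itself.

    # four idle threads, each pinned to a single core by mask, 0% active
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # threads with preset activity levels
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

    # raise the last thread to 50% active, then create and delete a throw-away thread
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"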
00:04:28.695 [2024-12-05 23:42:01.201943] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58455 ] 00:04:28.695 [2024-12-05 23:42:01.362937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.953 [2024-12-05 23:42:01.461707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.953 [2024-12-05 23:42:01.461923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.570 23:42:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.570 23:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:29.570 23:42:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.829 Malloc0 00:04:29.829 23:42:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.087 Malloc1 00:04:30.087 23:42:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.087 /dev/nbd0 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.087 23:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:30.088 23:42:02 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.088 1+0 records in 00:04:30.088 1+0 records out 00:04:30.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201078 s, 20.4 MB/s 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.088 23:42:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.088 23:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.088 23:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.088 23:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.346 /dev/nbd1 00:04:30.346 23:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.346 23:42:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.346 1+0 records in 00:04:30.346 1+0 records out 00:04:30.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195013 s, 21.0 MB/s 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.346 23:42:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.346 23:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.346 23:42:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.346 23:42:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.346 23:42:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
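The waitfornbd checks traced above (a bounded grep against /proc/partitions followed by a single-block dd) follow a simple readiness pattern. Below is a condensed sketch of that logic as it appears in the trace; the retry delay is an assumption added for completeness rather than something copied from common/autotest_common.sh.

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        # poll up to 20 times until the device shows up in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry delay assumed; the trace only shows the bounded loop
        done
        # one O_DIRECT read proves the device actually serves I/O
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # non-empty read means the NBD device is ready
    }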
00:04:30.346 23:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.603 { 00:04:30.603 "nbd_device": "/dev/nbd0", 00:04:30.603 "bdev_name": "Malloc0" 00:04:30.603 }, 00:04:30.603 { 00:04:30.603 "nbd_device": "/dev/nbd1", 00:04:30.603 "bdev_name": "Malloc1" 00:04:30.603 } 00:04:30.603 ]' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.603 { 00:04:30.603 "nbd_device": "/dev/nbd0", 00:04:30.603 "bdev_name": "Malloc0" 00:04:30.603 }, 00:04:30.603 { 00:04:30.603 "nbd_device": "/dev/nbd1", 00:04:30.603 "bdev_name": "Malloc1" 00:04:30.603 } 00:04:30.603 ]' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.603 /dev/nbd1' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.603 /dev/nbd1' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.603 256+0 records in 00:04:30.603 256+0 records out 00:04:30.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685524 s, 153 MB/s 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.603 256+0 records in 00:04:30.603 256+0 records out 00:04:30.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203825 s, 51.4 MB/s 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.603 23:42:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.861 256+0 records in 00:04:30.861 256+0 records out 00:04:30.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205076 s, 51.1 MB/s 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.861 23:42:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.861 23:42:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.118 23:42:03 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.118 23:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.374 23:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.374 23:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.374 23:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.374 23:42:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.374 23:42:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.939 23:42:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.502 [2024-12-05 23:42:05.072027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.502 [2024-12-05 23:42:05.168347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.502 [2024-12-05 23:42:05.168464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.758 [2024-12-05 23:42:05.281909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.758 [2024-12-05 23:42:05.281979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.653 spdk_app_start Round 1 00:04:34.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.653 23:42:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.653 23:42:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:34.653 23:42:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:04:34.653 23:42:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:04:34.653 23:42:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.653 23:42:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.653 23:42:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
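Each app_repeat round in this log runs the same data-verification cycle. Stripped of the helper-function plumbing, one round amounts to roughly the following RPC and dd calls, all taken from the trace ($rpc is shorthand for the scripts/rpc.py invocation used throughout):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # two 64 MB malloc bdevs (4 KiB blocks), each exported over NBD
    $rpc bdev_malloc_create 64 4096      # -> Malloc0
    $rpc bdev_malloc_create 64 4096      # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # write 1 MiB of random data through each device, then read-compare it
    dd if=/dev/urandom of="$testfile" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$testfile" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$testfile" "$nbd"
    done
    rm "$testfile"

    # tear down and signal the app so the next round can begin
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM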
00:04:34.653 23:42:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.653 23:42:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.913 23:42:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.913 23:42:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:34.913 23:42:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.172 Malloc0 00:04:35.172 23:42:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.429 Malloc1 00:04:35.429 23:42:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.429 23:42:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.686 /dev/nbd0 00:04:35.686 23:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.686 23:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.686 1+0 records in 00:04:35.686 1+0 records out 
00:04:35.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231847 s, 17.7 MB/s 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:35.686 23:42:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:35.686 23:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.686 23:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.686 23:42:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.943 /dev/nbd1 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.943 1+0 records in 00:04:35.943 1+0 records out 00:04:35.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208971 s, 19.6 MB/s 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:35.943 23:42:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.943 23:42:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.200 { 00:04:36.200 "nbd_device": "/dev/nbd0", 00:04:36.200 "bdev_name": "Malloc0" 00:04:36.200 }, 00:04:36.200 { 00:04:36.200 "nbd_device": "/dev/nbd1", 00:04:36.200 "bdev_name": "Malloc1" 00:04:36.200 } 
00:04:36.200 ]' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.200 { 00:04:36.200 "nbd_device": "/dev/nbd0", 00:04:36.200 "bdev_name": "Malloc0" 00:04:36.200 }, 00:04:36.200 { 00:04:36.200 "nbd_device": "/dev/nbd1", 00:04:36.200 "bdev_name": "Malloc1" 00:04:36.200 } 00:04:36.200 ]' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.200 /dev/nbd1' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.200 /dev/nbd1' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.200 256+0 records in 00:04:36.200 256+0 records out 00:04:36.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605519 s, 173 MB/s 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.200 23:42:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.200 256+0 records in 00:04:36.201 256+0 records out 00:04:36.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150546 s, 69.7 MB/s 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.201 256+0 records in 00:04:36.201 256+0 records out 00:04:36.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166109 s, 63.1 MB/s 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.201 23:42:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.201 23:42:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.458 23:42:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.716 23:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.976 23:42:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.976 23:42:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.234 23:42:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.801 [2024-12-05 23:42:10.334916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.801 [2024-12-05 23:42:10.412796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.801 [2024-12-05 23:42:10.412961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.061 [2024-12-05 23:42:10.517142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.061 [2024-12-05 23:42:10.517206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.600 23:42:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.600 spdk_app_start Round 2 00:04:40.600 23:42:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:40.600 23:42:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
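The count=2 and count=0 values in the rounds above come from counting /dev/nbd entries in the nbd_get_disks output. A self-contained version of that check, using the same rpc.py socket as the trace, would look like:

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    nbd_disks_json=$($rpc_py nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero when nothing matches, so the `true` fallback
    # (visible in the trace) keeps an empty list from being treated as an error
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "NBD devices currently exported: $count"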
00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.600 23:42:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:40.600 23:42:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.600 Malloc0 00:04:40.600 23:42:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.861 Malloc1 00:04:40.861 23:42:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.861 23:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.122 /dev/nbd0 00:04:41.122 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.122 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.122 1+0 records in 00:04:41.122 1+0 records out 
00:04:41.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481248 s, 8.5 MB/s 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:41.122 23:42:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:41.122 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.122 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.122 23:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.382 /dev/nbd1 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.382 1+0 records in 00:04:41.382 1+0 records out 00:04:41.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223191 s, 18.4 MB/s 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:41.382 23:42:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.382 23:42:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.643 { 00:04:41.643 "nbd_device": "/dev/nbd0", 00:04:41.643 "bdev_name": "Malloc0" 00:04:41.643 }, 00:04:41.643 { 00:04:41.643 "nbd_device": "/dev/nbd1", 00:04:41.643 "bdev_name": "Malloc1" 00:04:41.643 } 
00:04:41.643 ]' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.643 { 00:04:41.643 "nbd_device": "/dev/nbd0", 00:04:41.643 "bdev_name": "Malloc0" 00:04:41.643 }, 00:04:41.643 { 00:04:41.643 "nbd_device": "/dev/nbd1", 00:04:41.643 "bdev_name": "Malloc1" 00:04:41.643 } 00:04:41.643 ]' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.643 /dev/nbd1' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.643 /dev/nbd1' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.643 256+0 records in 00:04:41.643 256+0 records out 00:04:41.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00873161 s, 120 MB/s 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.643 256+0 records in 00:04:41.643 256+0 records out 00:04:41.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221896 s, 47.3 MB/s 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.643 256+0 records in 00:04:41.643 256+0 records out 00:04:41.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201523 s, 52.0 MB/s 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.643 23:42:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.643 23:42:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.644 23:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.903 23:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.163 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.423 23:42:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.423 23:42:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.683 23:42:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.621 [2024-12-05 23:42:16.033782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.621 [2024-12-05 23:42:16.154255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.621 [2024-12-05 23:42:16.154400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.621 [2024-12-05 23:42:16.281905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.621 [2024-12-05 23:42:16.281997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.174 23:42:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:46.174 23:42:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58455 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58455 ']' 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58455 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58455 00:04:46.174 killing process with pid 58455 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58455' 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58455 00:04:46.174 23:42:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58455 00:04:46.442 spdk_app_start is called in Round 0. 00:04:46.442 Shutdown signal received, stop current app iteration 00:04:46.442 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 reinitialization... 00:04:46.442 spdk_app_start is called in Round 1. 00:04:46.442 Shutdown signal received, stop current app iteration 00:04:46.442 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 reinitialization... 00:04:46.442 spdk_app_start is called in Round 2. 00:04:46.442 Shutdown signal received, stop current app iteration 00:04:46.442 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 reinitialization... 00:04:46.442 spdk_app_start is called in Round 3. 00:04:46.442 Shutdown signal received, stop current app iteration 00:04:46.442 ************************************ 00:04:46.442 END TEST app_repeat 00:04:46.442 ************************************ 00:04:46.442 23:42:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:46.442 23:42:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:46.442 00:04:46.442 real 0m17.846s 00:04:46.442 user 0m39.043s 00:04:46.442 sys 0m2.072s 00:04:46.442 23:42:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.442 23:42:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.442 23:42:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:46.442 23:42:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:46.442 23:42:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.442 23:42:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.442 23:42:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.442 ************************************ 00:04:46.442 START TEST cpu_locks 00:04:46.442 ************************************ 00:04:46.442 23:42:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:46.442 * Looking for test storage... 
00:04:46.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:46.443 23:42:19 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.443 23:42:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.443 23:42:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.704 23:42:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.704 --rc genhtml_branch_coverage=1 00:04:46.704 --rc genhtml_function_coverage=1 00:04:46.704 --rc genhtml_legend=1 00:04:46.704 --rc geninfo_all_blocks=1 00:04:46.704 --rc geninfo_unexecuted_blocks=1 00:04:46.704 00:04:46.704 ' 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.704 --rc genhtml_branch_coverage=1 00:04:46.704 --rc genhtml_function_coverage=1 
00:04:46.704 --rc genhtml_legend=1 00:04:46.704 --rc geninfo_all_blocks=1 00:04:46.704 --rc geninfo_unexecuted_blocks=1 00:04:46.704 00:04:46.704 ' 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.704 --rc genhtml_branch_coverage=1 00:04:46.704 --rc genhtml_function_coverage=1 00:04:46.704 --rc genhtml_legend=1 00:04:46.704 --rc geninfo_all_blocks=1 00:04:46.704 --rc geninfo_unexecuted_blocks=1 00:04:46.704 00:04:46.704 ' 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.704 --rc genhtml_branch_coverage=1 00:04:46.704 --rc genhtml_function_coverage=1 00:04:46.704 --rc genhtml_legend=1 00:04:46.704 --rc geninfo_all_blocks=1 00:04:46.704 --rc geninfo_unexecuted_blocks=1 00:04:46.704 00:04:46.704 ' 00:04:46.704 23:42:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:46.704 23:42:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:46.704 23:42:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:46.704 23:42:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.704 23:42:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.704 ************************************ 00:04:46.704 START TEST default_locks 00:04:46.704 ************************************ 00:04:46.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58891 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58891 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.704 23:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.704 [2024-12-05 23:42:19.263314] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
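The cmp_versions trace a few records back (scripts/common.sh@333 onward) splits both version strings on '.', '-' and ':' and compares them field by field to decide whether the detected lcov (1.15 here) is older than 2, which in turn selects the LCOV_OPTS exported for the suite. A condensed standalone sketch of that comparison, assuming purely numeric fields; it is not the exact scripts/common.sh code, which also handles non-numeric fields via its decimal() check:

# Sketch of the field-by-field check traced above: returns 0 (true) when $1 < $2.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"    # matches the outcome in the trace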
00:04:46.704 [2024-12-05 23:42:19.263437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:04:46.965 [2024-12-05 23:42:19.418702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.965 [2024-12-05 23:42:19.496834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.539 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.539 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:47.539 23:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58891 00:04:47.539 23:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58891 00:04:47.539 23:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58891 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58891 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.800 killing process with pid 58891 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58891 00:04:47.800 23:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58891 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58891 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58891 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58891 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
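The two helpers exercised just above are the heart of this suite: locks_exist (cpu_locks.sh@22) asserts that the target pid is holding an spdk_cpu_lock file lock, and killprocess tears the target down after confirming the process name. A minimal sketch of both checks, with the pid taken from this run purely for illustration; the real killprocess additionally handles sudo-wrapped targets and waits for the pid to exit:

pid=58891                                              # pid from the run above; illustrative only
# locks_exist: the core lock shows up as a file lock on /var/tmp/spdk_cpu_lock_*
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds its CPU core lock"
# killprocess: only signal the process if it still looks like an SPDK reactor
[[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] && kill "$pid"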
00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.189 ERROR: process (pid: 58891) is no longer running 00:04:49.189 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58891) - No such process 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.189 00:04:49.189 real 0m2.275s 00:04:49.189 user 0m2.252s 00:04:49.189 sys 0m0.420s 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.189 ************************************ 00:04:49.189 END TEST default_locks 00:04:49.189 ************************************ 00:04:49.189 23:42:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.189 23:42:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:49.189 23:42:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.189 23:42:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.189 23:42:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.189 ************************************ 00:04:49.189 START TEST default_locks_via_rpc 00:04:49.189 ************************************ 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58944 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58944 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58944 ']' 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.189 23:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.189 [2024-12-05 23:42:21.582804] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:49.189 [2024-12-05 23:42:21.582925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:04:49.189 [2024-12-05 23:42:21.742953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.189 [2024-12-05 23:42:21.836919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.760 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58944 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58944 00:04:49.761 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58944 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58944 ']' 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58944 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58944 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.022 killing process with pid 58944 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58944' 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58944 00:04:50.022 23:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58944 00:04:51.407 00:04:51.407 real 0m2.550s 00:04:51.407 user 0m2.526s 00:04:51.408 sys 0m0.412s 00:04:51.408 23:42:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.408 23:42:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.408 ************************************ 00:04:51.408 END TEST default_locks_via_rpc 00:04:51.408 ************************************ 00:04:51.408 23:42:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:51.408 23:42:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.408 23:42:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.408 23:42:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.408 ************************************ 00:04:51.408 START TEST non_locking_app_on_locked_coremask 00:04:51.408 ************************************ 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58996 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58996 /var/tmp/spdk.sock 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58996 ']' 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.408 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.667 [2024-12-05 23:42:24.160398] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:04:51.668 [2024-12-05 23:42:24.160480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:04:51.668 [2024-12-05 23:42:24.310305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.928 [2024-12-05 23:42:24.389240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59012 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59012 /var/tmp/spdk2.sock 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59012 ']' 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.497 23:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.497 [2024-12-05 23:42:25.064059] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:52.497 [2024-12-05 23:42:25.064184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:04:52.756 [2024-12-05 23:42:25.227403] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
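What non_locking_app_on_locked_coremask just demonstrated: the first target (pid 58996) still holds the core-0 lock, yet the second target comes up on the same core because it is started with --disable-cpumask-locks and its own RPC socket, hence the "CPU core locks deactivated" notice above. A sketch of the two launches, with paths relative to the SPDK build tree and the same flags shown at cpu_locks.sh@79 and @83:

# First instance: claims core 0 and the default RPC socket /var/tmp/spdk.sock.
build/bin/spdk_tgt -m 0x1 &
# Second instance: same core, but no lock is taken and RPC moves to a second socket,
# so both can run side by side.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &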
00:04:52.756 [2024-12-05 23:42:25.227436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.756 [2024-12-05 23:42:25.381930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.698 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.698 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:53.698 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58996 00:04:53.698 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58996 00:04:53.698 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58996 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58996 ']' 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58996 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58996 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.959 killing process with pid 58996 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58996' 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58996 00:04:53.959 23:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58996 00:04:56.559 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59012 00:04:56.559 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59012 ']' 00:04:56.559 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59012 00:04:56.559 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.559 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.559 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59012 00:04:56.559 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.559 killing process with pid 59012 00:04:56.559 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.559 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59012' 00:04:56.559 23:42:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59012 00:04:56.559 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59012 00:04:57.496 00:04:57.496 real 0m6.092s 00:04:57.496 user 0m6.322s 00:04:57.496 sys 0m0.777s 00:04:57.496 23:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.496 23:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.496 ************************************ 00:04:57.496 END TEST non_locking_app_on_locked_coremask 00:04:57.496 ************************************ 00:04:57.802 23:42:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:57.802 23:42:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.802 23:42:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.802 23:42:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.802 ************************************ 00:04:57.802 START TEST locking_app_on_unlocked_coremask 00:04:57.802 ************************************ 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59103 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59103 /var/tmp/spdk.sock 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59103 ']' 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.802 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:57.802 [2024-12-05 23:42:30.310176] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:04:57.802 [2024-12-05 23:42:30.310288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59103 ] 00:04:57.802 [2024-12-05 23:42:30.466936] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:57.802 [2024-12-05 23:42:30.467003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.062 [2024-12-05 23:42:30.575564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59119 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59119 /var/tmp/spdk2.sock 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59119 ']' 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.635 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.635 [2024-12-05 23:42:31.216546] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:04:58.635 [2024-12-05 23:42:31.216661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:04:58.897 [2024-12-05 23:42:31.380035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.897 [2024-12-05 23:42:31.541928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.839 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.839 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.839 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59119 00:04:59.839 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59119 00:04:59.839 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59103 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59103 ']' 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59103 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59103 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.101 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.101 killing process with pid 59103 00:05:00.102 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59103' 00:05:00.102 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59103 00:05:00.102 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59103 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59119 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59119 ']' 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59119 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59119 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59119' 00:05:02.633 killing process with pid 59119 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59119 00:05:02.633 23:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59119 00:05:04.021 00:05:04.021 real 0m6.099s 00:05:04.021 user 0m6.358s 00:05:04.021 sys 0m0.821s 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.021 ************************************ 00:05:04.021 END TEST locking_app_on_unlocked_coremask 00:05:04.021 ************************************ 00:05:04.021 23:42:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:04.021 23:42:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.021 23:42:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.021 23:42:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.021 ************************************ 00:05:04.021 START TEST locking_app_on_locked_coremask 00:05:04.021 ************************************ 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59210 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59210 /var/tmp/spdk.sock 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59210 ']' 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.021 23:42:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.021 [2024-12-05 23:42:36.436954] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:04.021 [2024-12-05 23:42:36.437067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ] 00:05:04.021 [2024-12-05 23:42:36.586473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.021 [2024-12-05 23:42:36.669084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.586 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.586 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59226 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59226 /var/tmp/spdk2.sock 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59226 /var/tmp/spdk2.sock 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59226 /var/tmp/spdk2.sock 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59226 ']' 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.587 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.844 [2024-12-05 23:42:37.349871] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:04.844 [2024-12-05 23:42:37.350003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:05:04.844 [2024-12-05 23:42:37.510930] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59210 has claimed it. 00:05:04.844 [2024-12-05 23:42:37.510998] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:05.411 ERROR: process (pid: 59226) is no longer running 00:05:05.411 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59226) - No such process 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59210 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59210 00:05:05.411 23:42:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59210 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59210 ']' 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59210 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59210 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.669 killing process with pid 59210 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59210' 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59210 00:05:05.669 23:42:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59210 00:05:07.046 00:05:07.046 real 0m3.006s 00:05:07.046 user 0m3.248s 00:05:07.046 sys 0m0.506s 00:05:07.046 23:42:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.046 ************************************ 00:05:07.047 END 
TEST locking_app_on_locked_coremask 00:05:07.047 ************************************ 00:05:07.047 23:42:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.047 23:42:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:07.047 23:42:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.047 23:42:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.047 23:42:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.047 ************************************ 00:05:07.047 START TEST locking_overlapped_coremask 00:05:07.047 ************************************ 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59279 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59279 /var/tmp/spdk.sock 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59279 ']' 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:07.047 23:42:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.047 [2024-12-05 23:42:39.528198] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:05:07.047 [2024-12-05 23:42:39.528366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:05:07.047 [2024-12-05 23:42:39.701915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.304 [2024-12-05 23:42:39.787296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.304 [2024-12-05 23:42:39.787710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.304 [2024-12-05 23:42:39.787820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59297 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59297 /var/tmp/spdk2.sock 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59297 /var/tmp/spdk2.sock 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59297 /var/tmp/spdk2.sock 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59297 ']' 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.869 23:42:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.869 [2024-12-05 23:42:40.456194] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
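The two masks used by locking_overlapped_coremask are chosen to collide on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target must fail to claim core 2 (the error appears just below, followed by check_remaining_locks verifying that the first target's three lock files are still in place). A quick confirmation of the overlap and the lock files involved, if run while the first target is alive:

printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2
ls /var/tmp/spdk_cpu_lock_00{0,1,2}                          # held by pid 59279 while it runs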
00:05:07.869 [2024-12-05 23:42:40.456314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:05:08.128 [2024-12-05 23:42:40.638583] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59279 has claimed it. 00:05:08.128 [2024-12-05 23:42:40.638826] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.387 ERROR: process (pid: 59297) is no longer running 00:05:08.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59297) - No such process 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59279 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59279 ']' 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59279 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.387 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.645 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59279 00:05:08.645 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.645 killing process with pid 59279 00:05:08.645 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.645 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59279' 00:05:08.645 23:42:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59279 00:05:08.645 23:42:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59279 00:05:10.018 00:05:10.018 real 0m2.870s 00:05:10.018 user 0m7.754s 00:05:10.018 sys 0m0.462s 00:05:10.018 23:42:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.018 23:42:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.019 ************************************ 00:05:10.019 END TEST locking_overlapped_coremask 00:05:10.019 ************************************ 00:05:10.019 23:42:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:10.019 23:42:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.019 23:42:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.019 23:42:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.019 ************************************ 00:05:10.019 START TEST locking_overlapped_coremask_via_rpc 00:05:10.019 ************************************ 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:10.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59350 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59350 /var/tmp/spdk.sock 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59350 ']' 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.019 23:42:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:10.019 [2024-12-05 23:42:42.404020] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:10.019 [2024-12-05 23:42:42.404136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59350 ] 00:05:10.019 [2024-12-05 23:42:42.558161] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:10.019 [2024-12-05 23:42:42.558196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.019 [2024-12-05 23:42:42.642321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.019 [2024-12-05 23:42:42.643596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.019 [2024-12-05 23:42:42.643616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59368 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59368 /var/tmp/spdk2.sock 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59368 ']' 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.585 23:42:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.843 [2024-12-05 23:42:43.307568] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:10.843 [2024-12-05 23:42:43.307686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:05:10.843 [2024-12-05 23:42:43.480919] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
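At this point both targets used by the via_rpc variant are up: unlike the plain coremask test that ended above, they are started with --disable-cpumask-locks, so they come up cleanly and the core conflict is deferred to the RPC exercised below. A condensed, hedged sketch of the same setup (commands copied from the cpu_locks.sh@147/@151 trace lines above; backgrounding added only for illustration):

```bash
# Core masks used by the two targets in this test:
#   0x7  -> cores 0,1,2  (first spdk_tgt, default socket /var/tmp/spdk.sock)
#   0x1c -> cores 2,3,4  (second spdk_tgt, -r /var/tmp/spdk2.sock)
# Both start with --disable-cpumask-locks, so no /var/tmp/spdk_cpu_lock_* file is
# taken yet; the overlap on core 2 only matters once locking is re-enabled over RPC.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
```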
00:05:10.843 [2024-12-05 23:42:43.484978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.102 [2024-12-05 23:42:43.693151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.102 [2024-12-05 23:42:43.693211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.102 [2024-12-05 23:42:43.693234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.481 [2024-12-05 23:42:44.873100] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59350 has claimed it. 00:05:12.481 request: 00:05:12.481 { 00:05:12.481 "method": "framework_enable_cpumask_locks", 00:05:12.481 "req_id": 1 00:05:12.481 } 00:05:12.481 Got JSON-RPC error response 00:05:12.481 response: 00:05:12.481 { 00:05:12.481 "code": -32603, 00:05:12.481 "message": "Failed to claim CPU core: 2" 00:05:12.481 } 00:05:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
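The framework_enable_cpumask_locks call above succeeds on the first target and fails with -32603 on the second, because core 2 sits in both masks and the first target already holds its lock file. The rpc_cmd helper in the trace forwards to scripts/rpc.py; a hedged sketch of issuing the same two calls by hand:

```bash
# First target (default socket): claims lock files for cores 0,1,2 and succeeds.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
# Second target (mask 0x1c on /var/tmp/spdk2.sock): core 2 is already locked, so this
# returns the JSON-RPC error shown above (-32603, "Failed to claim CPU core: 2").
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
```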
00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59350 /var/tmp/spdk.sock 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59350 ']' 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.481 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59368 /var/tmp/spdk2.sock 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59368 ']' 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
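check_remaining_locks, which runs next (cpu_locks.sh@36-38 below, and earlier at @139 above), verifies that exactly the lock files for the first target's cores exist. A standalone sketch of that comparison, assuming the first target still holds cores 0-2; the trace performs the equivalent match with a pattern test:

```bash
# Same check as cpu_locks.sh@36-38 in the trace: the glob of existing lock files
# must match exactly the expected set for cores 0,1,2 of the first target.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "remaining locks OK: ${locks[*]}"
```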
00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.481 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.755 00:05:12.755 real 0m2.967s 00:05:12.755 user 0m1.065s 00:05:12.755 sys 0m0.123s 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.755 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.755 ************************************ 00:05:12.755 END TEST locking_overlapped_coremask_via_rpc 00:05:12.755 ************************************ 00:05:12.755 23:42:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.755 23:42:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59350 ]] 00:05:12.755 23:42:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59350 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59350 ']' 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59350 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59350 00:05:12.755 killing process with pid 59350 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59350' 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59350 00:05:12.755 23:42:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59350 00:05:14.147 23:42:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59368 ]] 00:05:14.147 23:42:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59368 00:05:14.147 23:42:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:05:14.147 23:42:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59368 00:05:14.147 23:42:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.147 23:42:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.147 
23:42:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59368 00:05:14.147 killing process with pid 59368 00:05:14.147 23:42:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.147 23:42:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.148 23:42:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59368' 00:05:14.148 23:42:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59368 00:05:14.148 23:42:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59368 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59350 ]] 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59350 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59350 ']' 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59350 00:05:15.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59350) - No such process 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59350 is not found' 00:05:15.521 Process with pid 59350 is not found 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59368 ]] 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59368 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59368 00:05:15.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59368) - No such process 00:05:15.521 Process with pid 59368 is not found 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59368 is not found' 00:05:15.521 23:42:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.521 00:05:15.521 real 0m29.031s 00:05:15.521 user 0m50.836s 00:05:15.521 sys 0m4.313s 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.521 ************************************ 00:05:15.521 END TEST cpu_locks 00:05:15.521 23:42:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.521 ************************************ 00:05:15.521 00:05:15.521 real 0m54.620s 00:05:15.521 user 1m42.640s 00:05:15.521 sys 0m7.193s 00:05:15.521 23:42:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.522 ************************************ 00:05:15.522 END TEST event 00:05:15.522 23:42:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.522 ************************************ 00:05:15.522 23:42:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:15.522 23:42:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.522 23:42:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.522 23:42:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.522 ************************************ 00:05:15.522 START TEST thread 00:05:15.522 ************************************ 00:05:15.522 23:42:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:15.522 * Looking for test storage... 
00:05:15.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:15.522 23:42:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.522 23:42:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.522 23:42:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.780 23:42:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.780 23:42:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.780 23:42:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.780 23:42:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.780 23:42:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.780 23:42:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.780 23:42:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.780 23:42:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.780 23:42:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.780 23:42:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.780 23:42:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.780 23:42:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.780 23:42:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:15.780 23:42:48 thread -- scripts/common.sh@345 -- # : 1 00:05:15.780 23:42:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.780 23:42:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.780 23:42:48 thread -- scripts/common.sh@365 -- # decimal 1 00:05:15.780 23:42:48 thread -- scripts/common.sh@353 -- # local d=1 00:05:15.780 23:42:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.780 23:42:48 thread -- scripts/common.sh@355 -- # echo 1 00:05:15.780 23:42:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.780 23:42:48 thread -- scripts/common.sh@366 -- # decimal 2 00:05:15.780 23:42:48 thread -- scripts/common.sh@353 -- # local d=2 00:05:15.780 23:42:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.780 23:42:48 thread -- scripts/common.sh@355 -- # echo 2 00:05:15.780 23:42:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.780 23:42:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.780 23:42:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.780 23:42:48 thread -- scripts/common.sh@368 -- # return 0 00:05:15.780 23:42:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.780 23:42:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.780 --rc genhtml_branch_coverage=1 00:05:15.780 --rc genhtml_function_coverage=1 00:05:15.780 --rc genhtml_legend=1 00:05:15.780 --rc geninfo_all_blocks=1 00:05:15.780 --rc geninfo_unexecuted_blocks=1 00:05:15.780 00:05:15.780 ' 00:05:15.780 23:42:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.781 --rc genhtml_branch_coverage=1 00:05:15.781 --rc genhtml_function_coverage=1 00:05:15.781 --rc genhtml_legend=1 00:05:15.781 --rc geninfo_all_blocks=1 00:05:15.781 --rc geninfo_unexecuted_blocks=1 00:05:15.781 00:05:15.781 ' 00:05:15.781 23:42:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:15.781 --rc genhtml_branch_coverage=1 00:05:15.781 --rc genhtml_function_coverage=1 00:05:15.781 --rc genhtml_legend=1 00:05:15.781 --rc geninfo_all_blocks=1 00:05:15.781 --rc geninfo_unexecuted_blocks=1 00:05:15.781 00:05:15.781 ' 00:05:15.781 23:42:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.781 --rc genhtml_branch_coverage=1 00:05:15.781 --rc genhtml_function_coverage=1 00:05:15.781 --rc genhtml_legend=1 00:05:15.781 --rc geninfo_all_blocks=1 00:05:15.781 --rc geninfo_unexecuted_blocks=1 00:05:15.781 00:05:15.781 ' 00:05:15.781 23:42:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.781 23:42:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:15.781 23:42:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.781 23:42:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.781 ************************************ 00:05:15.781 START TEST thread_poller_perf 00:05:15.781 ************************************ 00:05:15.781 23:42:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.781 [2024-12-05 23:42:48.311308] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:15.781 [2024-12-05 23:42:48.311427] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59528 ] 00:05:15.781 [2024-12-05 23:42:48.469638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.039 Running 1000 pollers for 1 seconds with 1 microseconds period. 
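The two thread_poller_perf runs in this section use the same binary with only the period flag changed (the second run appears below). The flag meanings here are inferred from the "Running 1000 pollers for 1 seconds with N microseconds period" banners in this log, so treat this as a hedged reading rather than documented semantics:

```bash
# -b 1000 : number of pollers, -t 1 : run time in seconds, -l : poller period in
# microseconds (1 = timed pollers in the first run, 0 = busy-loop pollers in the second).
/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
```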
00:05:16.039 [2024-12-05 23:42:48.565156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.412 [2024-12-05T23:42:50.121Z] ====================================== 00:05:17.412 [2024-12-05T23:42:50.121Z] busy:2609985900 (cyc) 00:05:17.412 [2024-12-05T23:42:50.121Z] total_run_count: 304000 00:05:17.412 [2024-12-05T23:42:50.121Z] tsc_hz: 2600000000 (cyc) 00:05:17.412 [2024-12-05T23:42:50.121Z] ====================================== 00:05:17.412 [2024-12-05T23:42:50.121Z] poller_cost: 8585 (cyc), 3301 (nsec) 00:05:17.412 00:05:17.412 real 0m1.456s 00:05:17.412 user 0m1.279s 00:05:17.412 sys 0m0.069s 00:05:17.412 23:42:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.412 23:42:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.412 ************************************ 00:05:17.412 END TEST thread_poller_perf 00:05:17.412 ************************************ 00:05:17.412 23:42:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:17.412 23:42:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:17.412 23:42:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.412 23:42:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.412 ************************************ 00:05:17.412 START TEST thread_poller_perf 00:05:17.412 ************************************ 00:05:17.412 23:42:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:17.412 [2024-12-05 23:42:49.803829] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:17.412 [2024-12-05 23:42:49.803916] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:05:17.412 [2024-12-05 23:42:49.958823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.412 Running 1000 pollers for 1 seconds with 0 microseconds period. 
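The poller_cost figures reported above follow directly from the busy cycle count, the run count, and the reported TSC frequency. A minimal sketch of the same arithmetic using the first run's numbers from this log:

```bash
# poller_cost = busy cycles / total poller runs, converted to ns via the 2.6 GHz TSC.
busy_cyc=2609985900
total_run_count=304000
tsc_hz=2600000000
cost_cyc=$(( busy_cyc / total_run_count ))        # 8585 cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 3301 ns, matching the report above
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```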
00:05:17.412 [2024-12-05 23:42:50.059597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.784 [2024-12-05T23:42:51.493Z] ====================================== 00:05:18.784 [2024-12-05T23:42:51.493Z] busy:2603270150 (cyc) 00:05:18.784 [2024-12-05T23:42:51.493Z] total_run_count: 3642000 00:05:18.784 [2024-12-05T23:42:51.493Z] tsc_hz: 2600000000 (cyc) 00:05:18.784 [2024-12-05T23:42:51.493Z] ====================================== 00:05:18.784 [2024-12-05T23:42:51.493Z] poller_cost: 714 (cyc), 274 (nsec) 00:05:18.784 ************************************ 00:05:18.784 END TEST thread_poller_perf 00:05:18.784 ************************************ 00:05:18.784 00:05:18.784 real 0m1.443s 00:05:18.784 user 0m1.270s 00:05:18.784 sys 0m0.065s 00:05:18.784 23:42:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.784 23:42:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.784 23:42:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:18.784 00:05:18.784 real 0m3.114s 00:05:18.784 user 0m2.654s 00:05:18.784 sys 0m0.246s 00:05:18.784 23:42:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.784 23:42:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.784 ************************************ 00:05:18.784 END TEST thread 00:05:18.784 ************************************ 00:05:18.784 23:42:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:18.784 23:42:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:18.784 23:42:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.784 23:42:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.784 23:42:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.784 ************************************ 00:05:18.784 START TEST app_cmdline 00:05:18.784 ************************************ 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:18.784 * Looking for test storage... 
00:05:18.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.784 23:42:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 
00:05:18.784 ' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 23:42:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:18.784 23:42:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59648 00:05:18.784 23:42:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59648 00:05:18.784 23:42:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59648 ']' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.784 23:42:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.784 [2024-12-05 23:42:51.483684] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
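The target starting here was launched with --rpcs-allowed spdk_get_version,rpc_get_methods (cmdline.sh@16 above), so only those two methods are callable; the env_dpdk_get_mem_stats attempt later in this test is expected to fail with "Method not found". A hedged sketch of exercising the same restriction by hand with scripts/rpc.py:

```bash
# Only the two allow-listed methods succeed against this target.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
# Any other method is rejected by the allow-list with JSON-RPC error -32601 ("Method not found").
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected as expected"
```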
00:05:18.784 [2024-12-05 23:42:51.483826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:05:19.041 [2024-12-05 23:42:51.636737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.041 [2024-12-05 23:42:51.736252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:19.973 { 00:05:19.973 "version": "SPDK v25.01-pre git sha1 a5e6ecf28", 00:05:19.973 "fields": { 00:05:19.973 "major": 25, 00:05:19.973 "minor": 1, 00:05:19.973 "patch": 0, 00:05:19.973 "suffix": "-pre", 00:05:19.973 "commit": "a5e6ecf28" 00:05:19.973 } 00:05:19.973 } 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:19.973 23:42:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:19.973 23:42:52 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:20.232 request: 00:05:20.232 { 00:05:20.232 "method": "env_dpdk_get_mem_stats", 00:05:20.232 "req_id": 1 00:05:20.232 } 00:05:20.232 Got JSON-RPC error response 00:05:20.232 response: 00:05:20.232 { 00:05:20.232 "code": -32601, 00:05:20.232 "message": "Method not found" 00:05:20.232 } 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.232 23:42:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59648 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59648 ']' 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59648 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59648 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59648' 00:05:20.232 killing process with pid 59648 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 59648 00:05:20.232 23:42:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 59648 00:05:21.606 ************************************ 00:05:21.606 END TEST app_cmdline 00:05:21.606 ************************************ 00:05:21.606 00:05:21.606 real 0m2.918s 00:05:21.606 user 0m3.292s 00:05:21.606 sys 0m0.423s 00:05:21.606 23:42:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.606 23:42:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:21.606 23:42:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:21.606 23:42:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.606 23:42:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.606 23:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:21.606 ************************************ 00:05:21.606 START TEST version 00:05:21.606 ************************************ 00:05:21.606 23:42:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:21.606 * Looking for test storage... 
00:05:21.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:21.865 23:42:54 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.865 23:42:54 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.865 23:42:54 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.865 23:42:54 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.865 23:42:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.865 23:42:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.865 23:42:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.865 23:42:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.865 23:42:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.865 23:42:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.865 23:42:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.865 23:42:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.865 23:42:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.865 23:42:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.865 23:42:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.865 23:42:54 version -- scripts/common.sh@344 -- # case "$op" in 00:05:21.865 23:42:54 version -- scripts/common.sh@345 -- # : 1 00:05:21.865 23:42:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.865 23:42:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.865 23:42:54 version -- scripts/common.sh@365 -- # decimal 1 00:05:21.865 23:42:54 version -- scripts/common.sh@353 -- # local d=1 00:05:21.865 23:42:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.865 23:42:54 version -- scripts/common.sh@355 -- # echo 1 00:05:21.865 23:42:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.865 23:42:54 version -- scripts/common.sh@366 -- # decimal 2 00:05:21.865 23:42:54 version -- scripts/common.sh@353 -- # local d=2 00:05:21.866 23:42:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.866 23:42:54 version -- scripts/common.sh@355 -- # echo 2 00:05:21.866 23:42:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.866 23:42:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.866 23:42:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.866 23:42:54 version -- scripts/common.sh@368 -- # return 0 00:05:21.866 23:42:54 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.866 23:42:54 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.866 --rc genhtml_branch_coverage=1 00:05:21.866 --rc genhtml_function_coverage=1 00:05:21.866 --rc genhtml_legend=1 00:05:21.866 --rc geninfo_all_blocks=1 00:05:21.866 --rc geninfo_unexecuted_blocks=1 00:05:21.866 00:05:21.866 ' 00:05:21.866 23:42:54 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.866 --rc genhtml_branch_coverage=1 00:05:21.866 --rc genhtml_function_coverage=1 00:05:21.866 --rc genhtml_legend=1 00:05:21.866 --rc geninfo_all_blocks=1 00:05:21.866 --rc geninfo_unexecuted_blocks=1 00:05:21.866 00:05:21.866 ' 00:05:21.866 23:42:54 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.866 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:21.866 --rc genhtml_branch_coverage=1 00:05:21.866 --rc genhtml_function_coverage=1 00:05:21.866 --rc genhtml_legend=1 00:05:21.866 --rc geninfo_all_blocks=1 00:05:21.866 --rc geninfo_unexecuted_blocks=1 00:05:21.866 00:05:21.866 ' 00:05:21.866 23:42:54 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.866 --rc genhtml_branch_coverage=1 00:05:21.866 --rc genhtml_function_coverage=1 00:05:21.866 --rc genhtml_legend=1 00:05:21.866 --rc geninfo_all_blocks=1 00:05:21.866 --rc geninfo_unexecuted_blocks=1 00:05:21.866 00:05:21.866 ' 00:05:21.866 23:42:54 version -- app/version.sh@17 -- # get_header_version major 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # cut -f2 00:05:21.866 23:42:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.866 23:42:54 version -- app/version.sh@17 -- # major=25 00:05:21.866 23:42:54 version -- app/version.sh@18 -- # get_header_version minor 00:05:21.866 23:42:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # cut -f2 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.866 23:42:54 version -- app/version.sh@18 -- # minor=1 00:05:21.866 23:42:54 version -- app/version.sh@19 -- # get_header_version patch 00:05:21.866 23:42:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # cut -f2 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.866 23:42:54 version -- app/version.sh@19 -- # patch=0 00:05:21.866 23:42:54 version -- app/version.sh@20 -- # get_header_version suffix 00:05:21.866 23:42:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # cut -f2 00:05:21.866 23:42:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.866 23:42:54 version -- app/version.sh@20 -- # suffix=-pre 00:05:21.866 23:42:54 version -- app/version.sh@22 -- # version=25.1 00:05:21.866 23:42:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:21.866 23:42:54 version -- app/version.sh@28 -- # version=25.1rc0 00:05:21.866 23:42:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:21.866 23:42:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:21.866 23:42:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:21.866 23:42:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:21.866 00:05:21.866 real 0m0.188s 00:05:21.866 user 0m0.126s 00:05:21.866 sys 0m0.091s 00:05:21.866 23:42:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.866 23:42:54 version -- common/autotest_common.sh@10 -- # set +x 00:05:21.866 ************************************ 00:05:21.866 END TEST version 00:05:21.866 ************************************ 00:05:21.866 23:42:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:21.866 23:42:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:21.866 23:42:54 -- spdk/autotest.sh@194 -- # uname -s 00:05:21.866 23:42:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:21.866 23:42:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:21.866 23:42:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:21.866 23:42:54 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:21.866 23:42:54 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:21.866 23:42:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.866 23:42:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.866 23:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:21.866 ************************************ 00:05:21.866 START TEST blockdev_nvme 00:05:21.866 ************************************ 00:05:21.866 23:42:54 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:21.866 * Looking for test storage... 00:05:21.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:21.866 23:42:54 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.866 23:42:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.866 23:42:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.125 23:42:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.125 23:42:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.126 --rc genhtml_branch_coverage=1 00:05:22.126 --rc genhtml_function_coverage=1 00:05:22.126 --rc genhtml_legend=1 00:05:22.126 --rc geninfo_all_blocks=1 00:05:22.126 --rc geninfo_unexecuted_blocks=1 00:05:22.126 00:05:22.126 ' 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.126 --rc genhtml_branch_coverage=1 00:05:22.126 --rc genhtml_function_coverage=1 00:05:22.126 --rc genhtml_legend=1 00:05:22.126 --rc geninfo_all_blocks=1 00:05:22.126 --rc geninfo_unexecuted_blocks=1 00:05:22.126 00:05:22.126 ' 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.126 --rc genhtml_branch_coverage=1 00:05:22.126 --rc genhtml_function_coverage=1 00:05:22.126 --rc genhtml_legend=1 00:05:22.126 --rc geninfo_all_blocks=1 00:05:22.126 --rc geninfo_unexecuted_blocks=1 00:05:22.126 00:05:22.126 ' 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.126 --rc genhtml_branch_coverage=1 00:05:22.126 --rc genhtml_function_coverage=1 00:05:22.126 --rc genhtml_legend=1 00:05:22.126 --rc geninfo_all_blocks=1 00:05:22.126 --rc geninfo_unexecuted_blocks=1 00:05:22.126 00:05:22.126 ' 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:22.126 23:42:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59820 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59820 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59820 ']' 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.126 23:42:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:22.126 23:42:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:22.126 [2024-12-05 23:42:54.700688] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
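With the blockdev_nvme target starting up, setup_nvme_conf (blockdev.sh@81-83 below) generates a bdev configuration for the emulated PCIe NVMe controllers and loads it over RPC before the bdevs are enumerated. A condensed, hedged sketch of that flow; rpc_cmd here stands for the autotest_common.sh helper used in the trace (it forwards to scripts/rpc.py), and the -j inline-JSON form mirrors the trace rather than documented rpc.py usage:

```bash
# Generate a bdev subsystem config with one bdev_nvme_attach_controller entry per
# local PCIe NVMe device (0000:00:10.0 .. 0000:00:13.0 in this VM), load it into
# the target, wait for bdev examination, then list the unclaimed bdev names.
json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)
rpc_cmd load_subsystem_config -j "$json"
rpc_cmd bdev_wait_for_examine
rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
```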
00:05:22.126 [2024-12-05 23:42:54.701343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:05:22.385 [2024-12-05 23:42:54.855645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.385 [2024-12-05 23:42:54.938484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.951 23:42:55 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.951 23:42:55 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:22.951 23:42:55 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:22.951 23:42:55 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:22.951 23:42:55 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:22.951 23:42:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:22.951 23:42:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:22.951 23:42:55 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:22.951 23:42:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.951 23:42:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.210 23:42:55 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.210 23:42:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:23.210 23:42:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.210 23:42:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.210 23:42:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.210 23:42:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.210 23:42:55 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:23.502 23:42:55 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:23.502 23:42:55 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:23.502 23:42:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.502 23:42:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.502 23:42:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.502 23:42:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:23.502 23:42:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:23.503 23:42:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4c6dac58-6128-4435-9745-f83b29c35877"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4c6dac58-6128-4435-9745-f83b29c35877",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6d516d28-1f59-4a71-bcf6-01b910f1232e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6d516d28-1f59-4a71-bcf6-01b910f1232e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1b150b81-3081-4782-8eab-7ffdcb3fe2c3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b150b81-3081-4782-8eab-7ffdcb3fe2c3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5627abcd-bdb5-44ac-bfa4-6beb07f2f882"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5627abcd-bdb5-44ac-bfa4-6beb07f2f882",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fd61d6c7-dcc5-431d-b16e-01b61d53c92b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fd61d6c7-dcc5-431d-b16e-01b61d53c92b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a7ca2b4f-199d-4782-8d57-b53614d0bb4b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a7ca2b4f-199d-4782-8d57-b53614d0bb4b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:23.503 23:42:55 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:23.503 23:42:55 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:23.503 23:42:55 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:23.503 23:42:55 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59820 00:05:23.503 23:42:55 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59820 ']' 00:05:23.503 23:42:55 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59820 00:05:23.503 23:42:55 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:23.503 23:42:55 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.503 23:42:55 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59820 00:05:23.503 killing process with pid 59820 00:05:23.503 23:42:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.503 23:42:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.503 23:42:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59820' 00:05:23.503 23:42:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59820 00:05:23.503 23:42:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59820 00:05:24.875 23:42:57 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:24.875 23:42:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:24.875 23:42:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:24.875 23:42:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.875 23:42:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:24.876 ************************************ 00:05:24.876 START TEST bdev_hello_world 00:05:24.876 ************************************ 00:05:24.876 23:42:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:24.876 [2024-12-05 23:42:57.269391] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:24.876 [2024-12-05 23:42:57.269934] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:05:24.876 [2024-12-05 23:42:57.418487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.876 [2024-12-05 23:42:57.501296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.445 [2024-12-05 23:42:58.002040] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:25.445 [2024-12-05 23:42:58.002093] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:25.445 [2024-12-05 23:42:58.002116] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:25.445 [2024-12-05 23:42:58.004592] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:25.445 [2024-12-05 23:42:58.005226] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:25.445 [2024-12-05 23:42:58.005267] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:25.445 [2024-12-05 23:42:58.005510] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
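Note: the bdev configuration consumed by setup_nvme_conf earlier in this block is the JSON blob emitted by scripts/gen_nvme.sh and passed to load_subsystem_config on a single line, which makes it hard to read in the trace. A minimal hand-typed equivalent, assuming the standard rpc.py options for bdev_nvme_attach_controller (-b controller name, -t transport type, -a PCI address) and a spdk_tgt already listening on /var/tmp/spdk.sock, would be:

    # run from the SPDK repo root (/home/vagrant/spdk_repo/spdk in this job)
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0

Each attached controller exposes its namespaces as NvmeXnY bdevs, which is where the Nvme0n1 through Nvme3n1 names in the bdev_get_bdevs dump above come from.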
00:05:25.445 00:05:25.445 [2024-12-05 23:42:58.005540] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:26.381 ************************************ 00:05:26.381 END TEST bdev_hello_world 00:05:26.381 ************************************ 00:05:26.381 00:05:26.381 real 0m1.522s 00:05:26.381 user 0m1.257s 00:05:26.381 sys 0m0.157s 00:05:26.381 23:42:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.381 23:42:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 23:42:58 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:26.381 23:42:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.381 23:42:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.381 23:42:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:26.381 ************************************ 00:05:26.381 START TEST bdev_bounds 00:05:26.381 ************************************ 00:05:26.381 Process bdevio pid: 59935 00:05:26.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.381 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:26.381 23:42:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59935 00:05:26.381 23:42:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:26.381 23:42:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.381 23:42:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59935' 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59935 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59935 ']' 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.382 23:42:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:26.382 [2024-12-05 23:42:58.858191] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
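For reference, the bdev list and the hello-world target used in the test above are derived directly from RPC output: blockdev.sh filters unclaimed bdevs with jq and takes the first name (Nvme0n1) as hello_world_bdev. Roughly, assuming the same RPC socket and bdev.json used in this run:

    # list unclaimed bdevs the same way bdev/blockdev.sh does
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # re-run the hello-world example against the first of them
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1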
00:05:26.382 [2024-12-05 23:42:58.858865] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59935 ] 00:05:26.382 [2024-12-05 23:42:59.033435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.662 [2024-12-05 23:42:59.137956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.662 [2024-12-05 23:42:59.138159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.662 [2024-12-05 23:42:59.138282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.230 23:42:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.230 23:42:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:27.230 23:42:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:27.230 I/O targets: 00:05:27.230 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:27.230 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:27.230 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:27.230 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:27.230 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:27.230 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:27.230 00:05:27.230 00:05:27.230 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.230 http://cunit.sourceforge.net/ 00:05:27.230 00:05:27.230 00:05:27.230 Suite: bdevio tests on: Nvme3n1 00:05:27.230 Test: blockdev write read block ...passed 00:05:27.230 Test: blockdev write zeroes read block ...passed 00:05:27.230 Test: blockdev write zeroes read no split ...passed 00:05:27.230 Test: blockdev write zeroes read split ...passed 00:05:27.230 Test: blockdev write zeroes read split partial ...passed 00:05:27.230 Test: blockdev reset ...[2024-12-05 23:42:59.855747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:27.230 [2024-12-05 23:42:59.858591] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
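The bdevio suites that follow are driven over RPC rather than purely from the command line: bdevio is started with -w so it waits for an RPC trigger and with -s 0 (the PRE_RESERVED_MEM value from blockdev.sh), and tests.py perform_tests then kicks off the I/O tests against every registered bdev. A minimal manual sketch, assuming the same bdev.json:

    # from the SPDK repo root; bdevio idles until perform_tests is issued
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    test/bdev/bdevio/tests.py perform_tests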
00:05:27.230 passed 00:05:27.230 Test: blockdev write read 8 blocks ...passed 00:05:27.230 Test: blockdev write read size > 128k ...passed 00:05:27.230 Test: blockdev write read invalid size ...passed 00:05:27.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:27.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:27.230 Test: blockdev write read max offset ...passed 00:05:27.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:27.230 Test: blockdev writev readv 8 blocks ...passed 00:05:27.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:27.230 Test: blockdev writev readv block ...passed 00:05:27.230 Test: blockdev writev readv size > 128k ...passed 00:05:27.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:27.230 Test: blockdev comparev and writev ...[2024-12-05 23:42:59.866725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b860a000 len:0x1000 00:05:27.230 [2024-12-05 23:42:59.866773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:27.230 passed 00:05:27.230 Test: blockdev nvme passthru rw ...passed 00:05:27.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:27.231 Test: blockdev nvme admin passthru ...[2024-12-05 23:42:59.867543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:27.231 [2024-12-05 23:42:59.867578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:27.231 passed 00:05:27.231 Test: blockdev copy ...passed 00:05:27.231 Suite: bdevio tests on: Nvme2n3 00:05:27.231 Test: blockdev write read block ...passed 00:05:27.231 Test: blockdev write zeroes read block ...passed 00:05:27.231 Test: blockdev write zeroes read no split ...passed 00:05:27.231 Test: blockdev write zeroes read split ...passed 00:05:27.231 Test: blockdev write zeroes read split partial ...passed 00:05:27.231 Test: blockdev reset ...[2024-12-05 23:42:59.922621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:27.231 [2024-12-05 23:42:59.925702] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:27.231 passed 00:05:27.231 Test: blockdev write read 8 blocks ...passed 00:05:27.231 Test: blockdev write read size > 128k ...passed 00:05:27.231 Test: blockdev write read invalid size ...passed 00:05:27.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:27.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:27.231 Test: blockdev write read max offset ...passed 00:05:27.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:27.231 Test: blockdev writev readv 8 blocks ...passed 00:05:27.231 Test: blockdev writev readv 30 x 1block ...passed 00:05:27.231 Test: blockdev writev readv block ...passed 00:05:27.231 Test: blockdev writev readv size > 128k ...passed 00:05:27.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:27.231 Test: blockdev comparev and writev ...[2024-12-05 23:42:59.933437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29b006000 len:0x1000 00:05:27.231 [2024-12-05 23:42:59.933477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:27.231 passed 00:05:27.231 Test: blockdev nvme passthru rw ...passed 00:05:27.231 Test: blockdev nvme passthru vendor specific ...passed 00:05:27.231 Test: blockdev nvme admin passthru ...[2024-12-05 23:42:59.934194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:27.231 [2024-12-05 23:42:59.934225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:27.491 passed 00:05:27.491 Test: blockdev copy ...passed 00:05:27.491 Suite: bdevio tests on: Nvme2n2 00:05:27.491 Test: blockdev write read block ...passed 00:05:27.491 Test: blockdev write zeroes read block ...passed 00:05:27.491 Test: blockdev write zeroes read no split ...passed 00:05:27.491 Test: blockdev write zeroes read split ...passed 00:05:27.491 Test: blockdev write zeroes read split partial ...passed 00:05:27.491 Test: blockdev reset ...[2024-12-05 23:42:59.991153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:27.491 [2024-12-05 23:42:59.994133] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:27.491 passed 00:05:27.491 Test: blockdev write read 8 blocks ...passed 00:05:27.491 Test: blockdev write read size > 128k ...passed 00:05:27.491 Test: blockdev write read invalid size ...passed 00:05:27.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:27.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:27.491 Test: blockdev write read max offset ...passed 00:05:27.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:27.491 Test: blockdev writev readv 8 blocks ...passed 00:05:27.491 Test: blockdev writev readv 30 x 1block ...passed 00:05:27.491 Test: blockdev writev readv block ...passed 00:05:27.491 Test: blockdev writev readv size > 128k ...passed 00:05:27.491 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:27.491 Test: blockdev comparev and writev ...[2024-12-05 23:43:00.001390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c863c000 len:0x1000 00:05:27.491 [2024-12-05 23:43:00.001437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:27.491 passed 00:05:27.491 Test: blockdev nvme passthru rw ...passed 00:05:27.491 Test: blockdev nvme passthru vendor specific ...passed 00:05:27.491 Test: blockdev nvme admin passthru ...[2024-12-05 23:43:00.002210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:27.491 [2024-12-05 23:43:00.002248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:27.491 passed 00:05:27.491 Test: blockdev copy ...passed 00:05:27.491 Suite: bdevio tests on: Nvme2n1 00:05:27.491 Test: blockdev write read block ...passed 00:05:27.491 Test: blockdev write zeroes read block ...passed 00:05:27.491 Test: blockdev write zeroes read no split ...passed 00:05:27.491 Test: blockdev write zeroes read split ...passed 00:05:27.491 Test: blockdev write zeroes read split partial ...passed 00:05:27.491 Test: blockdev reset ...[2024-12-05 23:43:00.057622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:27.492 [2024-12-05 23:43:00.060578] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:05:27.492 Test: blockdev write read 8 blocks ...passed 00:05:27.492 Test: blockdev write read size > 128k ...uccessful. 
00:05:27.492 passed 00:05:27.492 Test: blockdev write read invalid size ...passed 00:05:27.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:27.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:27.492 Test: blockdev write read max offset ...passed 00:05:27.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:27.492 Test: blockdev writev readv 8 blocks ...passed 00:05:27.492 Test: blockdev writev readv 30 x 1block ...passed 00:05:27.492 Test: blockdev writev readv block ...passed 00:05:27.492 Test: blockdev writev readv size > 128k ...passed 00:05:27.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:27.492 Test: blockdev comparev and writev ...[2024-12-05 23:43:00.066757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:05:27.492 Test: blockdev nvme passthru rw ...passed 00:05:27.492 Test: blockdev nvme passthru vendor specific ...passed 00:05:27.492 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2c8638000 len:0x1000 00:05:27.492 [2024-12-05 23:43:00.066948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:27.492 [2024-12-05 23:43:00.067421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:27.492 [2024-12-05 23:43:00.067446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:27.492 passed 00:05:27.492 Test: blockdev copy ...passed 00:05:27.492 Suite: bdevio tests on: Nvme1n1 00:05:27.492 Test: blockdev write read block ...passed 00:05:27.492 Test: blockdev write zeroes read block ...passed 00:05:27.492 Test: blockdev write zeroes read no split ...passed 00:05:27.492 Test: blockdev write zeroes read split ...passed 00:05:27.492 Test: blockdev write zeroes read split partial ...passed 00:05:27.492 Test: blockdev reset ...[2024-12-05 23:43:00.117048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:27.492 [2024-12-05 23:43:00.119711] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
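Each suite's "blockdev reset" exercises a full controller disconnect/reconnect, as the nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete notices show. The same path can also be poked by hand, assuming the bdev_nvme_reset_controller RPC is available in this SPDK tree (this is not part of the test flow, only a way to reproduce the reset outside bdevio):

    # hypothetical manual reset of the controller attached as Nvme1
    scripts/rpc.py bdev_nvme_reset_controller Nvme1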
00:05:27.492 passed 00:05:27.492 Test: blockdev write read 8 blocks ...passed 00:05:27.492 Test: blockdev write read size > 128k ...passed 00:05:27.492 Test: blockdev write read invalid size ...passed 00:05:27.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:27.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:27.492 Test: blockdev write read max offset ...passed 00:05:27.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:27.492 Test: blockdev writev readv 8 blocks ...passed 00:05:27.492 Test: blockdev writev readv 30 x 1block ...passed 00:05:27.492 Test: blockdev writev readv block ...passed 00:05:27.492 Test: blockdev writev readv size > 128k ...passed 00:05:27.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:27.492 Test: blockdev comparev and writev ...[2024-12-05 23:43:00.126943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:05:27.492 Test: blockdev nvme passthru rw ...passed 00:05:27.492 Test: blockdev nvme passthru vendor specific ...passed 00:05:27.492 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2c8634000 len:0x1000 00:05:27.492 [2024-12-05 23:43:00.127162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:27.492 [2024-12-05 23:43:00.127688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:27.492 [2024-12-05 23:43:00.127714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:27.492 passed 00:05:27.492 Test: blockdev copy ...passed 00:05:27.492 Suite: bdevio tests on: Nvme0n1 00:05:27.492 Test: blockdev write read block ...passed 00:05:27.492 Test: blockdev write zeroes read block ...passed 00:05:27.492 Test: blockdev write zeroes read no split ...passed 00:05:27.492 Test: blockdev write zeroes read split ...passed 00:05:27.492 Test: blockdev write zeroes read split partial ...passed 00:05:27.492 Test: blockdev reset ...[2024-12-05 23:43:00.185820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:27.492 [2024-12-05 23:43:00.188501] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:05:27.492 passed 00:05:27.492 Test: blockdev write read 8 blocks ...passed 00:05:27.492 Test: blockdev write read size > 128k ...passed 00:05:27.492 Test: blockdev write read invalid size ...passed 00:05:27.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:27.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:27.492 Test: blockdev write read max offset ...passed 00:05:27.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:27.492 Test: blockdev writev readv 8 blocks ...passed 00:05:27.492 Test: blockdev writev readv 30 x 1block ...passed 00:05:27.492 Test: blockdev writev readv block ...passed 00:05:27.492 Test: blockdev writev readv size > 128k ...passed 00:05:27.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:27.492 Test: blockdev comparev and writev ...[2024-12-05 23:43:00.196142] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 spassed 00:05:27.492 Test: blockdev nvme passthru rw ...ince it has 00:05:27.492 separate metadata which is not supported yet. 00:05:27.492 passed 00:05:27.492 Test: blockdev nvme passthru vendor specific ...passed 00:05:27.492 Test: blockdev nvme admin passthru ...[2024-12-05 23:43:00.196754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:27.492 [2024-12-05 23:43:00.196798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:27.751 passed 00:05:27.751 Test: blockdev copy ...passed 00:05:27.751 00:05:27.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.751 suites 6 6 n/a 0 0 00:05:27.751 tests 138 138 138 0 0 00:05:27.751 asserts 893 893 893 0 n/a 00:05:27.751 00:05:27.751 Elapsed time = 1.051 seconds 00:05:27.751 0 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59935 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59935 ']' 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59935 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59935 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59935' 00:05:27.751 killing process with pid 59935 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59935 00:05:27.751 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59935 00:05:28.317 23:43:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:28.317 00:05:28.317 real 0m2.146s 00:05:28.317 user 0m5.334s 00:05:28.317 sys 0m0.323s 00:05:28.317 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.317 23:43:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:28.317 ************************************ 00:05:28.317 END 
TEST bdev_bounds 00:05:28.317 ************************************ 00:05:28.318 23:43:00 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:28.318 23:43:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:28.318 23:43:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.318 23:43:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:28.318 ************************************ 00:05:28.318 START TEST bdev_nbd 00:05:28.318 ************************************ 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59995 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59995 /var/tmp/spdk-nbd.sock 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59995 ']' 00:05:28.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
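The bdev_nbd test below exports each NVMe bdev as a kernel block device through SPDK's NBD server: bdev_svc is started with its RPC socket at /var/tmp/spdk-nbd.sock, nbd_start_disk maps a bdev to a /dev/nbdX node, and the waitfornbd/dd helpers verify the node is readable. A condensed sketch of the same flow, assuming the nbd kernel module is loaded (the script checks /sys/module/nbd for this):

    # from the SPDK repo root; bdev_svc loads the same bdev.json used above
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    # map Nvme0n1 to an nbd device; the RPC prints the /dev/nbdX it allocated
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
    # a direct-I/O read of one 4 KiB block confirms the device is live
    dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct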
00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:28.318 23:43:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:28.590 [2024-12-05 23:43:01.032625] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:28.590 [2024-12-05 23:43:01.032722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:28.590 [2024-12-05 23:43:01.190033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.590 [2024-12-05 23:43:01.293733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:29.523 23:43:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:29.523 1+0 records in 00:05:29.523 1+0 records out 00:05:29.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059424 s, 6.9 MB/s 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:29.523 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:29.781 1+0 records in 00:05:29.781 1+0 records out 00:05:29.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491262 s, 8.3 MB/s 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 
00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:29.781 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.039 1+0 records in 00:05:30.039 1+0 records out 00:05:30.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576002 s, 7.1 MB/s 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:30.039 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.298 23:43:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.298 1+0 records in 00:05:30.298 1+0 records out 00:05:30.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418715 s, 9.8 MB/s 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:30.298 23:43:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.557 1+0 records in 00:05:30.557 1+0 records out 00:05:30.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498229 s, 8.2 MB/s 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.557 23:43:03 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:30.557 1+0 records in 00:05:30.557 1+0 records out 00:05:30.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587451 s, 7.0 MB/s 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:30.557 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd0", 00:05:30.816 "bdev_name": "Nvme0n1" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd1", 00:05:30.816 "bdev_name": "Nvme1n1" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd2", 00:05:30.816 "bdev_name": "Nvme2n1" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd3", 00:05:30.816 "bdev_name": "Nvme2n2" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd4", 00:05:30.816 "bdev_name": "Nvme2n3" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd5", 00:05:30.816 "bdev_name": "Nvme3n1" 00:05:30.816 } 00:05:30.816 ]' 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq 
-r '.[] | .nbd_device')) 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd0", 00:05:30.816 "bdev_name": "Nvme0n1" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd1", 00:05:30.816 "bdev_name": "Nvme1n1" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd2", 00:05:30.816 "bdev_name": "Nvme2n1" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd3", 00:05:30.816 "bdev_name": "Nvme2n2" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd4", 00:05:30.816 "bdev_name": "Nvme2n3" 00:05:30.816 }, 00:05:30.816 { 00:05:30.816 "nbd_device": "/dev/nbd5", 00:05:30.816 "bdev_name": "Nvme3n1" 00:05:30.816 } 00:05:30.816 ]' 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.816 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.075 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.335 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.336 23:43:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:31.689 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:31.689 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:31.689 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.690 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.952 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.953 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:32.212 23:43:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.212 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:32.470 23:43:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:32.470 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:32.470 23:43:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:32.470 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.470 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:32.470 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.471 
23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:32.471 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:32.728 /dev/nbd0 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:32.728 1+0 records in 00:05:32.728 1+0 records out 00:05:32.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291558 s, 14.0 MB/s 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:32.728 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:32.986 /dev/nbd1 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.986 23:43:05 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:32.986 1+0 records in 00:05:32.986 1+0 records out 00:05:32.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051569 s, 7.9 MB/s 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:32.986 /dev/nbd10 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.986 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:33.244 1+0 records in 00:05:33.244 1+0 records out 00:05:33.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548722 s, 7.5 MB/s 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:33.244 /dev/nbd11 
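Note on the pattern repeating above: every nbd_start_disk call is followed by the waitfornbd helper from autotest_common.sh, which polls /proc/partitions for the new device and then proves it actually serves data with a single 4 KiB O_DIRECT read into a scratch file. A rough standalone rendering of that check follows; the 20-try bound and the dd/stat/rm sequence come from the trace, while the helper name used here, the sleep between attempts, and the scratch path are assumptions added for the sketch.

  # Approximation of the waitfornbd readiness check traced above.
  waitfornbd_sketch() {
    local nbd_name=$1 tmp_file=$2 i size
    # 1) wait (up to 20 tries) for the kernel to list the device
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # 2) prove it serves data: one 4 KiB direct read must produce a non-empty file
    for ((i = 1; i <= 20; i++)); do
      if dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct 2>/dev/null; then
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [ "$size" != 0 ] && return 0
      fi
      sleep 0.1
    done
    return 1
  }

Typical use mirroring the trace: waitfornbd_sketch nbd5 /tmp/nbdtest.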
00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:33.244 1+0 records in 00:05:33.244 1+0 records out 00:05:33.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589083 s, 7.0 MB/s 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:33.244 23:43:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:33.502 /dev/nbd12 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:33.502 1+0 records in 00:05:33.502 1+0 records out 00:05:33.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394937 s, 10.4 MB/s 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:33.502 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:33.759 /dev/nbd13 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:33.759 1+0 records in 00:05:33.759 1+0 records out 00:05:33.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687233 s, 6.0 MB/s 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.759 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd0", 00:05:34.016 "bdev_name": "Nvme0n1" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd1", 
00:05:34.016 "bdev_name": "Nvme1n1" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd10", 00:05:34.016 "bdev_name": "Nvme2n1" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd11", 00:05:34.016 "bdev_name": "Nvme2n2" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd12", 00:05:34.016 "bdev_name": "Nvme2n3" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd13", 00:05:34.016 "bdev_name": "Nvme3n1" 00:05:34.016 } 00:05:34.016 ]' 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd0", 00:05:34.016 "bdev_name": "Nvme0n1" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd1", 00:05:34.016 "bdev_name": "Nvme1n1" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd10", 00:05:34.016 "bdev_name": "Nvme2n1" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd11", 00:05:34.016 "bdev_name": "Nvme2n2" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd12", 00:05:34.016 "bdev_name": "Nvme2n3" 00:05:34.016 }, 00:05:34.016 { 00:05:34.016 "nbd_device": "/dev/nbd13", 00:05:34.016 "bdev_name": "Nvme3n1" 00:05:34.016 } 00:05:34.016 ]' 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.016 /dev/nbd1 00:05:34.016 /dev/nbd10 00:05:34.016 /dev/nbd11 00:05:34.016 /dev/nbd12 00:05:34.016 /dev/nbd13' 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.016 /dev/nbd1 00:05:34.016 /dev/nbd10 00:05:34.016 /dev/nbd11 00:05:34.016 /dev/nbd12 00:05:34.016 /dev/nbd13' 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.016 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:34.017 256+0 records in 00:05:34.017 256+0 records out 00:05:34.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593397 s, 177 MB/s 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.017 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.275 
256+0 records in 00:05:34.275 256+0 records out 00:05:34.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0535708 s, 19.6 MB/s 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.275 256+0 records in 00:05:34.275 256+0 records out 00:05:34.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0524832 s, 20.0 MB/s 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:34.275 256+0 records in 00:05:34.275 256+0 records out 00:05:34.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0533796 s, 19.6 MB/s 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:34.275 256+0 records in 00:05:34.275 256+0 records out 00:05:34.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0547895 s, 19.1 MB/s 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:34.275 256+0 records in 00:05:34.275 256+0 records out 00:05:34.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0564957 s, 18.6 MB/s 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.275 23:43:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:34.534 256+0 records in 00:05:34.534 256+0 records out 00:05:34.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0526459 s, 19.9 MB/s 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 
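The dd/cmp sequence running here is the data path of nbd_dd_data_verify: one 1 MiB random pattern is generated, written to every attached NBD device with O_DIRECT (the 256 x 4 KiB dd runs above), and each device is then compared back against the same pattern (the cmp -b -n 1M calls above and below). A condensed sketch, with the device list taken from the trace and the pattern path shortened to /tmp as an assumption:

  # Write one shared random pattern to every NBD device, then verify it reads back.
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  pattern=/tmp/nbdrandtest

  dd if=/dev/urandom of="$pattern" bs=4096 count=256           # 1 MiB pattern
  for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct   # write it out
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$dev" || echo "mismatch on $dev" >&2
  done
  rm -f "$pattern"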
00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.534 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.792 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.051 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.308 23:43:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.567 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:35.826 23:43:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.826 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:36.087 malloc_lvol_verify 00:05:36.087 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:36.346 ca9f02b1-cc68-401e-aa01-717d4c35afa8 00:05:36.346 23:43:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:36.604 4ee733a1-06ae-4336-aee3-7541492fb517 00:05:36.605 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:36.862 /dev/nbd0 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:36.862 mke2fs 1.47.0 (5-Feb-2023) 00:05:36.862 Discarding device blocks: 0/4096 done 00:05:36.862 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:36.862 00:05:36.862 Allocating group tables: 0/1 done 00:05:36.862 Writing inode tables: 0/1 done 00:05:36.862 Creating journal (1024 blocks): done 00:05:36.862 Writing superblocks and filesystem accounting information: 0/1 done 00:05:36.862 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.862 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.863 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.863 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.863 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.863 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.863 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.863 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59995 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59995 ']' 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59995 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.129 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59995 00:05:37.130 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.130 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.130 killing process with pid 59995 00:05:37.130 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59995' 00:05:37.130 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59995 00:05:37.130 23:43:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59995 00:05:37.728 23:43:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:37.728 00:05:37.728 
real 0m9.321s 00:05:37.728 user 0m13.586s 00:05:37.728 sys 0m2.879s 00:05:37.728 23:43:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.728 23:43:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 ************************************ 00:05:37.728 END TEST bdev_nbd 00:05:37.728 ************************************ 00:05:37.728 23:43:10 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:05:37.728 23:43:10 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:05:37.728 skipping fio tests on NVMe due to multi-ns failures. 00:05:37.728 23:43:10 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:05:37.728 23:43:10 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:37.728 23:43:10 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:37.728 23:43:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:37.728 23:43:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.728 23:43:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 ************************************ 00:05:37.728 START TEST bdev_verify 00:05:37.728 ************************************ 00:05:37.728 23:43:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:37.728 [2024-12-05 23:43:10.395745] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:37.728 [2024-12-05 23:43:10.395882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60362 ] 00:05:37.986 [2024-12-05 23:43:10.564880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.986 [2024-12-05 23:43:10.673993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.986 [2024-12-05 23:43:10.674013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.920 Running I/O for 5 seconds... 
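Stepping back to the tail of the bdev_nbd stage that finished just above: its final check is an lvol round-trip, building a malloc bdev, an lvstore and a small lvol over the same NBD socket, exporting the lvol as /dev/nbd0 and running mkfs.ext4 on it as a smoke test. A hand-driven sketch of that sequence; bdev names and numeric arguments are the ones from the trace, while the repo-relative rpc.py path is an abbreviation.

  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev (size 16, 512 B blocks)
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $RPC bdev_lvol_create lvol 4 -l lvs                    # lvol 'lvol' of size 4 in 'lvs'
  $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0

  # Proceed only once the kernel reports a non-zero size, then smoke-test with ext4.
  (( $(cat /sys/block/nbd0/size) > 0 )) && mkfs.ext4 /dev/nbd0

  $RPC nbd_stop_disk /dev/nbd0

mkfs.ext4 is a convenient end-to-end check here because it exercises reads, writes and flushes across the whole lvol without needing any SPDK-specific tooling.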
00:05:40.783 22016.00 IOPS, 86.00 MiB/s [2024-12-05T23:43:14.893Z] 23552.00 IOPS, 92.00 MiB/s [2024-12-05T23:43:15.825Z] 24042.67 IOPS, 93.92 MiB/s [2024-12-05T23:43:16.758Z] 24128.00 IOPS, 94.25 MiB/s [2024-12-05T23:43:16.758Z] 24012.80 IOPS, 93.80 MiB/s 00:05:44.049 Latency(us) 00:05:44.049 [2024-12-05T23:43:16.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:44.049 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x0 length 0xbd0bd 00:05:44.049 Nvme0n1 : 5.04 1981.96 7.74 0.00 0.00 64300.59 11040.30 79853.10 00:05:44.049 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:44.049 Nvme0n1 : 5.06 1971.66 7.70 0.00 0.00 64751.81 11342.77 76223.41 00:05:44.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x0 length 0xa0000 00:05:44.049 Nvme1n1 : 5.06 1985.51 7.76 0.00 0.00 64051.10 6805.66 74206.92 00:05:44.049 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0xa0000 length 0xa0000 00:05:44.049 Nvme1n1 : 5.07 1970.51 7.70 0.00 0.00 64631.86 13812.97 70173.93 00:05:44.049 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x0 length 0x80000 00:05:44.049 Nvme2n1 : 5.07 1993.71 7.79 0.00 0.00 63779.41 8822.15 68964.04 00:05:44.049 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x80000 length 0x80000 00:05:44.049 Nvme2n1 : 5.07 1970.00 7.70 0.00 0.00 64507.93 13107.20 66947.54 00:05:44.049 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x0 length 0x80000 00:05:44.049 Nvme2n2 : 5.07 1993.19 7.79 0.00 0.00 63672.85 8973.39 67754.14 00:05:44.049 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x80000 length 0x80000 00:05:44.049 Nvme2n2 : 5.07 1969.47 7.69 0.00 0.00 64350.34 12502.25 65737.65 00:05:44.049 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x0 length 0x80000 00:05:44.049 Nvme2n3 : 5.07 1992.64 7.78 0.00 0.00 63550.04 9275.86 74206.92 00:05:44.049 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x80000 length 0x80000 00:05:44.049 Nvme2n3 : 5.07 1968.93 7.69 0.00 0.00 64218.07 11796.48 68560.74 00:05:44.049 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x0 length 0x20000 00:05:44.049 Nvme3n1 : 5.08 1992.09 7.78 0.00 0.00 63424.01 9074.22 80256.39 00:05:44.049 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.049 Verification LBA range: start 0x20000 length 0x20000 00:05:44.049 Nvme3n1 : 5.08 1978.82 7.73 0.00 0.00 63797.75 2369.38 73803.62 00:05:44.049 [2024-12-05T23:43:16.758Z] =================================================================================================================== 00:05:44.049 [2024-12-05T23:43:16.758Z] Total : 23768.50 92.85 0.00 0.00 64084.10 2369.38 80256.39 00:05:44.984 00:05:44.984 real 0m7.272s 00:05:44.984 user 0m13.559s 00:05:44.984 sys 0m0.260s 00:05:44.984 23:43:17 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.984 23:43:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:44.984 ************************************ 00:05:44.984 END TEST bdev_verify 00:05:44.984 ************************************ 00:05:44.984 23:43:17 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:44.984 23:43:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:44.984 23:43:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.984 23:43:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:44.984 ************************************ 00:05:44.984 START TEST bdev_verify_big_io 00:05:44.984 ************************************ 00:05:44.984 23:43:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:45.243 [2024-12-05 23:43:17.707189] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:45.243 [2024-12-05 23:43:17.707307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60460 ] 00:05:45.243 [2024-12-05 23:43:17.870744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.501 [2024-12-05 23:43:17.981374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.501 [2024-12-05 23:43:17.981379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.068 Running I/O for 5 seconds... 
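Both verify stages, the 4 KiB run summarized above and the 64 KiB big-I/O run just launched, are thin wrappers around the bdevperf example using the bdev.json generated earlier in the job; only the -o (I/O size) argument changes. The command lines, copied from the trace with repo-relative paths substituted:

  BDEVPERF=build/examples/bdevperf   # paths relative to the SPDK repo root
  CONF=test/bdev/bdev.json           # bdev config written earlier by the test

  # -q 128: queue depth, -o: I/O size in bytes, -w verify: verified I/O,
  # -t 5: run time in seconds, -m 0x3: cores 0 and 1
  $BDEVPERF --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
  $BDEVPERF --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io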
00:05:51.250 952.00 IOPS, 59.50 MiB/s [2024-12-05T23:43:24.891Z] 2078.00 IOPS, 129.88 MiB/s [2024-12-05T23:43:24.891Z] 2624.67 IOPS, 164.04 MiB/s 00:05:52.182 Latency(us) 00:05:52.182 [2024-12-05T23:43:24.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:52.182 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0x0 length 0xbd0b 00:05:52.182 Nvme0n1 : 5.66 114.28 7.14 0.00 0.00 1044817.98 14317.10 1071160.71 00:05:52.182 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0xbd0b length 0xbd0b 00:05:52.182 Nvme0n1 : 5.67 109.93 6.87 0.00 0.00 1123726.95 8670.92 1690627.15 00:05:52.182 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0x0 length 0xa000 00:05:52.182 Nvme1n1 : 5.76 122.22 7.64 0.00 0.00 976222.88 70173.93 942105.21 00:05:52.182 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0xa000 length 0xa000 00:05:52.182 Nvme1n1 : 5.67 109.87 6.87 0.00 0.00 1084558.25 30852.33 1729343.80 00:05:52.182 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0x0 length 0x8000 00:05:52.182 Nvme2n1 : 5.86 127.76 7.98 0.00 0.00 913834.47 62107.96 1006632.96 00:05:52.182 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0x8000 length 0x8000 00:05:52.182 Nvme2n1 : 5.83 114.09 7.13 0.00 0.00 1006399.47 46984.27 1768060.46 00:05:52.182 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:52.182 Verification LBA range: start 0x0 length 0x8000 00:05:52.183 Nvme2n2 : 5.89 130.88 8.18 0.00 0.00 865036.88 37708.41 1006632.96 00:05:52.183 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:52.183 Verification LBA range: start 0x8000 length 0x8000 00:05:52.183 Nvme2n2 : 5.87 117.52 7.35 0.00 0.00 946043.80 68560.74 1793871.56 00:05:52.183 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:52.183 Verification LBA range: start 0x0 length 0x8000 00:05:52.183 Nvme2n3 : 5.89 135.56 8.47 0.00 0.00 811274.10 26214.40 1019538.51 00:05:52.183 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:52.183 Verification LBA range: start 0x8000 length 0x8000 00:05:52.183 Nvme2n3 : 5.93 136.89 8.56 0.00 0.00 792081.90 35490.26 1387346.71 00:05:52.183 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:52.183 Verification LBA range: start 0x0 length 0x2000 00:05:52.183 Nvme3n1 : 5.94 150.86 9.43 0.00 0.00 707043.57 623.85 1025991.29 00:05:52.183 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:52.183 Verification LBA range: start 0x2000 length 0x2000 00:05:52.183 Nvme3n1 : 5.99 168.53 10.53 0.00 0.00 625226.66 112.64 1651910.50 00:05:52.183 [2024-12-05T23:43:24.892Z] =================================================================================================================== 00:05:52.183 [2024-12-05T23:43:24.892Z] Total : 1538.40 96.15 0.00 0.00 886536.98 112.64 1793871.56 00:05:53.553 00:05:53.553 real 0m8.522s 00:05:53.553 user 0m16.092s 00:05:53.553 sys 0m0.270s 00:05:53.553 23:43:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.553 
************************************ 00:05:53.553 END TEST bdev_verify_big_io 00:05:53.553 ************************************ 00:05:53.553 23:43:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:53.553 23:43:26 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:53.553 23:43:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:05:53.553 23:43:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.553 23:43:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.553 ************************************ 00:05:53.553 START TEST bdev_write_zeroes 00:05:53.553 ************************************ 00:05:53.553 23:43:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:53.811 [2024-12-05 23:43:26.271355] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:53.811 [2024-12-05 23:43:26.271474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60569 ] 00:05:53.811 [2024-12-05 23:43:26.429808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.068 [2024-12-05 23:43:26.521117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.633 Running I/O for 1 seconds... 
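The MiB/s column in the two summaries above is just IOPS scaled by the I/O size, which makes the tables easy to sanity-check: 23768.50 IOPS at 4 KiB and 1538.40 IOPS at 64 KiB both land in the 93-96 MiB/s range. The conversion, reproduced as one-liners:

  # MiB/s = IOPS * io_size_bytes / 2^20; matches the "Total" rows above.
  awk 'BEGIN { printf "%.2f MiB/s\n", 23768.50 * 4096  / 1048576 }'   # verify run,  prints 92.85
  awk 'BEGIN { printf "%.2f MiB/s\n", 1538.40  * 65536 / 1048576 }'   # big-I/O run, prints 96.15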
00:05:55.564 73280.00 IOPS, 286.25 MiB/s 00:05:55.564 Latency(us) 00:05:55.564 [2024-12-05T23:43:28.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:55.565 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:55.565 Nvme0n1 : 1.02 12167.04 47.53 0.00 0.00 10500.38 8065.97 20467.40 00:05:55.565 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:55.565 Nvme1n1 : 1.02 12153.00 47.47 0.00 0.00 10500.55 8318.03 20769.87 00:05:55.565 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:55.565 Nvme2n1 : 1.02 12139.15 47.42 0.00 0.00 10490.29 8065.97 19559.98 00:05:55.565 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:55.565 Nvme2n2 : 1.02 12125.27 47.36 0.00 0.00 10483.97 8368.44 19559.98 00:05:55.565 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:55.565 Nvme2n3 : 1.03 12111.45 47.31 0.00 0.00 10448.72 8166.79 18047.61 00:05:55.565 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:55.565 Nvme3n1 : 1.03 12035.46 47.01 0.00 0.00 10498.11 8065.97 25407.80 00:05:55.565 [2024-12-05T23:43:28.274Z] =================================================================================================================== 00:05:55.565 [2024-12-05T23:43:28.274Z] Total : 72731.36 284.11 0.00 0.00 10486.99 8065.97 25407.80 00:05:56.499 00:05:56.499 real 0m2.709s 00:05:56.499 user 0m2.407s 00:05:56.499 sys 0m0.188s 00:05:56.499 23:43:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.499 ************************************ 00:05:56.499 END TEST bdev_write_zeroes 00:05:56.499 23:43:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:56.499 ************************************ 00:05:56.499 23:43:28 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:56.499 23:43:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:05:56.499 23:43:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.499 23:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.499 ************************************ 00:05:56.499 START TEST bdev_json_nonenclosed 00:05:56.499 ************************************ 00:05:56.499 23:43:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:56.499 [2024-12-05 23:43:29.015443] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
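The two bdev_json_* stages starting here are negative tests: bdevperf is pointed at a deliberately malformed configuration (one file whose content is not enclosed in an object, one whose 'subsystems' entry is not an array) and the stage passes only if the app logs the corresponding 'Invalid JSON configuration' error and exits non-zero, as the output below shows. A minimal reproduction of the first case; the payload is an illustrative stand-in, not the real test/bdev/nonenclosed.json.

  # A config whose top-level value is not a JSON object should be rejected with
  # "Invalid JSON configuration: not enclosed in {}." and a non-zero exit code.
  echo '[]' > /tmp/not_enclosed.json

  if build/examples/bdevperf --json /tmp/not_enclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "unexpected: malformed config was accepted" >&2
  else
    echo "rejected as expected"
  fi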
00:05:56.499 [2024-12-05 23:43:29.015558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60622 ] 00:05:56.499 [2024-12-05 23:43:29.177580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.757 [2024-12-05 23:43:29.284958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.757 [2024-12-05 23:43:29.285044] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:05:56.758 [2024-12-05 23:43:29.285060] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:56.758 [2024-12-05 23:43:29.285069] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.016 ************************************ 00:05:57.016 END TEST bdev_json_nonenclosed 00:05:57.016 00:05:57.016 real 0m0.521s 00:05:57.016 user 0m0.320s 00:05:57.016 sys 0m0.097s 00:05:57.016 23:43:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.016 23:43:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:05:57.016 ************************************ 00:05:57.016 23:43:29 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:57.016 23:43:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:05:57.016 23:43:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.016 23:43:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.016 ************************************ 00:05:57.016 START TEST bdev_json_nonarray 00:05:57.016 ************************************ 00:05:57.016 23:43:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:57.016 [2024-12-05 23:43:29.576566] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:57.016 [2024-12-05 23:43:29.576684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60642 ] 00:05:57.274 [2024-12-05 23:43:29.736346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.274 [2024-12-05 23:43:29.844093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.274 [2024-12-05 23:43:29.844174] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:05:57.274 [2024-12-05 23:43:29.844192] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:57.274 [2024-12-05 23:43:29.844204] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.533 ************************************ 00:05:57.533 END TEST bdev_json_nonarray 00:05:57.533 ************************************ 00:05:57.533 00:05:57.533 real 0m0.527s 00:05:57.533 user 0m0.312s 00:05:57.533 sys 0m0.111s 00:05:57.533 23:43:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.533 23:43:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:05:57.533 23:43:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:05:57.533 00:05:57.533 real 0m35.598s 00:05:57.533 user 0m55.735s 00:05:57.533 sys 0m4.962s 00:05:57.533 23:43:30 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.533 23:43:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.533 ************************************ 00:05:57.533 END TEST blockdev_nvme 00:05:57.533 ************************************ 00:05:57.533 23:43:30 -- spdk/autotest.sh@209 -- # uname -s 00:05:57.533 23:43:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:05:57.533 23:43:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:05:57.533 23:43:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.533 23:43:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.533 23:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.533 ************************************ 00:05:57.533 START TEST blockdev_nvme_gpt 00:05:57.533 ************************************ 00:05:57.533 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:05:57.533 * Looking for test storage... 
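The two negative tests that just finished (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf deliberately malformed configs; their exact contents are not reproduced in this log, but the shape json_config_prepare_ctx accepts, and the two violations it rejects, can be sketched as follows (illustrative only, not the repository's nonenclosed.json / nonarray.json):

# Illustrative sketch only -- a well-formed SPDK JSON config is a single object
# whose "subsystems" member is an array of per-subsystem objects.
cat > /tmp/spdk_config_sketch.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# The error paths exercised above correspond to breaking that shape:
#   "not enclosed in {}"               -- top-level value is not a JSON object
#   "'subsystems' should be an array"  -- "subsystems" exists but is not an array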
00:05:57.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:57.533 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.533 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.533 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.791 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.791 23:43:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:05:57.791 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.791 --rc genhtml_branch_coverage=1 00:05:57.791 --rc genhtml_function_coverage=1 00:05:57.791 --rc genhtml_legend=1 00:05:57.791 --rc geninfo_all_blocks=1 00:05:57.791 --rc geninfo_unexecuted_blocks=1 00:05:57.791 00:05:57.791 ' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.791 --rc 
genhtml_branch_coverage=1 00:05:57.791 --rc genhtml_function_coverage=1 00:05:57.791 --rc genhtml_legend=1 00:05:57.791 --rc geninfo_all_blocks=1 00:05:57.791 --rc geninfo_unexecuted_blocks=1 00:05:57.791 00:05:57.791 ' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.791 --rc genhtml_branch_coverage=1 00:05:57.791 --rc genhtml_function_coverage=1 00:05:57.791 --rc genhtml_legend=1 00:05:57.791 --rc geninfo_all_blocks=1 00:05:57.791 --rc geninfo_unexecuted_blocks=1 00:05:57.791 00:05:57.791 ' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.791 --rc genhtml_branch_coverage=1 00:05:57.791 --rc genhtml_function_coverage=1 00:05:57.791 --rc genhtml_legend=1 00:05:57.791 --rc geninfo_all_blocks=1 00:05:57.791 --rc geninfo_unexecuted_blocks=1 00:05:57.791 00:05:57.791 ' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:05:57.791 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60728 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60728 
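The long xtrace block above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before exporting the coverage flags. A condensed, standalone sketch of that dotted-version comparison follows; it uses the same split-and-compare idea but is not the actual scripts/common.sh code and assumes plain numeric version components.

# Minimal "ver1 < ver2" check over dotted versions, in the spirit of the
# cmp_versions trace above (illustrative; not the real implementation).
version_lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # prints the message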
00:05:57.792 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60728 ']' 00:05:57.792 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.792 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.792 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.792 23:43:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:57.792 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.792 23:43:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:05:57.792 [2024-12-05 23:43:30.350180] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:05:57.792 [2024-12-05 23:43:30.350303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:05:58.049 [2024-12-05 23:43:30.510389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.049 [2024-12-05 23:43:30.619599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.633 23:43:31 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.633 23:43:31 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:05:58.633 23:43:31 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:58.633 23:43:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:05:58.633 23:43:31 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.148 Waiting for block devices as requested 00:05:59.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.148 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.148 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.404 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:04.716 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:04.716 23:43:36 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.716 23:43:37 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:04.716 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:04.716 BYT; 00:06:04.716 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:04.717 BYT; 00:06:04.717 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:04.717 23:43:37 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:04.717 23:43:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:05.650 The operation has completed successfully. 00:06:05.650 23:43:38 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:06.585 The operation has completed successfully. 00:06:06.585 23:43:39 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:07.411 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.411 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.411 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.411 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:07.411 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:07.411 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.411 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.411 [] 00:06:07.411 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.411 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:07.411 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:07.411 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:07.411 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:07.668 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:07.668 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.668 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:07.928 23:43:40 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:07.928 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:07.928 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:07.929 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "da2a2dc3-0d73-4583-ad41-f9bde63900c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "da2a2dc3-0d73-4583-ad41-f9bde63900c9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ceb481fb-2b06-436e-aa0c-61bdd04151f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ceb481fb-2b06-436e-aa0c-61bdd04151f5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "81551736-5bfe-418e-ad17-c65b1daba2ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81551736-5bfe-418e-ad17-c65b1daba2ca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "903a2a15-3a40-4aec-aa76-3f3fcd2b24b6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "903a2a15-3a40-4aec-aa76-3f3fcd2b24b6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d2cda2c7-e89e-4271-8a90-2b0e041c13fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d2cda2c7-e89e-4271-8a90-2b0e041c13fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:07.929 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:07.929 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:07.929 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:07.929 23:43:40 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60728 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60728 ']' 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60728 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60728 00:06:07.929 killing process with pid 60728 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60728' 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60728 00:06:07.929 23:43:40 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60728 00:06:09.828 23:43:42 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:09.828 23:43:42 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:09.828 23:43:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:09.828 23:43:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.828 23:43:42 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:09.828 ************************************ 00:06:09.828 START TEST bdev_hello_world 00:06:09.828 ************************************ 00:06:09.828 23:43:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:09.828 [2024-12-05 23:43:42.121958] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:09.828 [2024-12-05 23:43:42.122108] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61353 ] 00:06:09.828 [2024-12-05 23:43:42.279163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.828 [2024-12-05 23:43:42.369057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.392 [2024-12-05 23:43:42.888259] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:10.392 [2024-12-05 23:43:42.888294] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:10.392 [2024-12-05 23:43:42.888311] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:10.392 [2024-12-05 23:43:42.890525] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:10.392 [2024-12-05 23:43:42.891054] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:10.392 [2024-12-05 23:43:42.891078] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:10.392 [2024-12-05 23:43:42.891310] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
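Stepping back to the setup_gpt_conf stage traced before this hello_world run: the suite first labels the bare namespace /dev/nvme0n1 with two test partitions via parted, then retypes them with sgdisk so they carry the SPDK GPT partition-type GUIDs the gpt bdev module recognizes. Condensed from the trace above (the device path and GUIDs are exactly the ones the log shows; the GUIDs were grep'd out of module/bdev/gpt/gpt.h):

# Condensed from the setup_gpt_conf trace earlier in this log.
dev=/dev/nvme0n1
parted -s "$dev" mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
# Partition 1 gets the current SPDK_GPT_PART_TYPE_GUID, partition 2 the _OLD one.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
       -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
       -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"

Once setup.sh unbinds the kernel driver again, the running spdk_tgt's bdev_get_bdevs dump above shows the two partitions exposed as Nvme1n1p1 and Nvme1n1p2 with exactly those partition_type_guid / unique_partition_guid values.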
00:06:10.392 00:06:10.392 [2024-12-05 23:43:42.891352] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:10.958 00:06:10.958 real 0m1.486s 00:06:10.958 user 0m1.183s 00:06:10.958 sys 0m0.197s 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:10.958 ************************************ 00:06:10.958 END TEST bdev_hello_world 00:06:10.958 ************************************ 00:06:10.958 23:43:43 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:10.958 23:43:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.958 23:43:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.958 23:43:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:10.958 ************************************ 00:06:10.958 START TEST bdev_bounds 00:06:10.958 ************************************ 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61389 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:10.958 Process bdevio pid: 61389 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61389' 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61389 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61389 ']' 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.958 23:43:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.958 [2024-12-05 23:43:43.645831] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:10.958 [2024-12-05 23:43:43.645950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61389 ] 00:06:11.216 [2024-12-05 23:43:43.803748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.216 [2024-12-05 23:43:43.899295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.216 [2024-12-05 23:43:43.899392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.216 [2024-12-05 23:43:43.899605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.151 23:43:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.151 23:43:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:12.151 23:43:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:12.151 I/O targets: 00:06:12.151 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:12.151 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:12.151 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:12.151 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.151 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.151 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.151 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:12.151 00:06:12.151 00:06:12.151 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.151 http://cunit.sourceforge.net/ 00:06:12.151 00:06:12.151 00:06:12.151 Suite: bdevio tests on: Nvme3n1 00:06:12.151 Test: blockdev write read block ...passed 00:06:12.151 Test: blockdev write zeroes read block ...passed 00:06:12.151 Test: blockdev write zeroes read no split ...passed 00:06:12.151 Test: blockdev write zeroes read split ...passed 00:06:12.151 Test: blockdev write zeroes read split partial ...passed 00:06:12.151 Test: blockdev reset ...[2024-12-05 23:43:44.619080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:12.151 [2024-12-05 23:43:44.622512] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:12.151 passed 00:06:12.151 Test: blockdev write read 8 blocks ...passed 00:06:12.151 Test: blockdev write read size > 128k ...passed 00:06:12.151 Test: blockdev write read invalid size ...passed 00:06:12.151 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.151 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.151 Test: blockdev write read max offset ...passed 00:06:12.151 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.151 Test: blockdev writev readv 8 blocks ...passed 00:06:12.151 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.151 Test: blockdev writev readv block ...passed 00:06:12.151 Test: blockdev writev readv size > 128k ...passed 00:06:12.151 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.151 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.633395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e04000 len:0x1000 00:06:12.151 [2024-12-05 23:43:44.633501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.151 passed 00:06:12.151 Test: blockdev nvme passthru rw ...passed 00:06:12.151 Test: blockdev nvme passthru vendor specific ...[2024-12-05 23:43:44.634754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.151 [2024-12-05 23:43:44.634788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.151 passed 00:06:12.151 Test: blockdev nvme admin passthru ...passed 00:06:12.151 Test: blockdev copy ...passed 00:06:12.151 Suite: bdevio tests on: Nvme2n3 00:06:12.151 Test: blockdev write read block ...passed 00:06:12.151 Test: blockdev write zeroes read block ...passed 00:06:12.151 Test: blockdev write zeroes read no split ...passed 00:06:12.151 Test: blockdev write zeroes read split ...passed 00:06:12.151 Test: blockdev write zeroes read split partial ...passed 00:06:12.151 Test: blockdev reset ...[2024-12-05 23:43:44.682741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.151 [2024-12-05 23:43:44.687960] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.151 passed 00:06:12.151 Test: blockdev write read 8 blocks ...passed 00:06:12.151 Test: blockdev write read size > 128k ...passed 00:06:12.151 Test: blockdev write read invalid size ...passed 00:06:12.151 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.151 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.151 Test: blockdev write read max offset ...passed 00:06:12.151 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.151 Test: blockdev writev readv 8 blocks ...passed 00:06:12.151 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.151 Test: blockdev writev readv block ...passed 00:06:12.151 Test: blockdev writev readv size > 128k ...passed 00:06:12.151 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.151 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.694726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e02000 len:0x1000 00:06:12.151 [2024-12-05 23:43:44.694764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.151 passed 00:06:12.151 Test: blockdev nvme passthru rw ...passed 00:06:12.152 Test: blockdev nvme passthru vendor specific ...[2024-12-05 23:43:44.695389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.152 [2024-12-05 23:43:44.695413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.152 passed 00:06:12.152 Test: blockdev nvme admin passthru ...passed 00:06:12.152 Test: blockdev copy ...passed 00:06:12.152 Suite: bdevio tests on: Nvme2n2 00:06:12.152 Test: blockdev write read block ...passed 00:06:12.152 Test: blockdev write zeroes read block ...passed 00:06:12.152 Test: blockdev write zeroes read no split ...passed 00:06:12.152 Test: blockdev write zeroes read split ...passed 00:06:12.152 Test: blockdev write zeroes read split partial ...passed 00:06:12.152 Test: blockdev reset ...[2024-12-05 23:43:44.746180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.152 [2024-12-05 23:43:44.749330] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.152 passed 00:06:12.152 Test: blockdev write read 8 blocks ...passed 00:06:12.152 Test: blockdev write read size > 128k ...passed 00:06:12.152 Test: blockdev write read invalid size ...passed 00:06:12.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.152 Test: blockdev write read max offset ...passed 00:06:12.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.152 Test: blockdev writev readv 8 blocks ...passed 00:06:12.152 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.152 Test: blockdev writev readv block ...passed 00:06:12.152 Test: blockdev writev readv size > 128k ...passed 00:06:12.152 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.152 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.756666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c38000 len:0x1000 00:06:12.152 [2024-12-05 23:43:44.756703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.152 passed 00:06:12.152 Test: blockdev nvme passthru rw ...passed 00:06:12.152 Test: blockdev nvme passthru vendor specific ...[2024-12-05 23:43:44.757547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.152 [2024-12-05 23:43:44.757650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.152 passed 00:06:12.152 Test: blockdev nvme admin passthru ...passed 00:06:12.152 Test: blockdev copy ...passed 00:06:12.152 Suite: bdevio tests on: Nvme2n1 00:06:12.152 Test: blockdev write read block ...passed 00:06:12.152 Test: blockdev write zeroes read block ...passed 00:06:12.152 Test: blockdev write zeroes read no split ...passed 00:06:12.152 Test: blockdev write zeroes read split ...passed 00:06:12.152 Test: blockdev write zeroes read split partial ...passed 00:06:12.152 Test: blockdev reset ...[2024-12-05 23:43:44.809960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.152 [2024-12-05 23:43:44.812952] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.152 passed 00:06:12.152 Test: blockdev write read 8 blocks ...passed 00:06:12.152 Test: blockdev write read size > 128k ...passed 00:06:12.152 Test: blockdev write read invalid size ...passed 00:06:12.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.152 Test: blockdev write read max offset ...passed 00:06:12.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.152 Test: blockdev writev readv 8 blocks ...passed 00:06:12.152 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.152 Test: blockdev writev readv block ...passed 00:06:12.152 Test: blockdev writev readv size > 128k ...passed 00:06:12.152 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.152 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.818940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c34000 len:0x1000 00:06:12.152 [2024-12-05 23:43:44.818984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.152 passed 00:06:12.152 Test: blockdev nvme passthru rw ...passed 00:06:12.152 Test: blockdev nvme passthru vendor specific ...[2024-12-05 23:43:44.819698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.152 [2024-12-05 23:43:44.819806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.152 passed 00:06:12.152 Test: blockdev nvme admin passthru ...passed 00:06:12.152 Test: blockdev copy ...passed 00:06:12.152 Suite: bdevio tests on: Nvme1n1p2 00:06:12.152 Test: blockdev write read block ...passed 00:06:12.152 Test: blockdev write zeroes read block ...passed 00:06:12.152 Test: blockdev write zeroes read no split ...passed 00:06:12.152 Test: blockdev write zeroes read split ...passed 00:06:12.410 Test: blockdev write zeroes read split partial ...passed 00:06:12.410 Test: blockdev reset ...[2024-12-05 23:43:44.862334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:12.410 [2024-12-05 23:43:44.864822] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:12.410 passed 00:06:12.410 Test: blockdev write read 8 blocks ...passed 00:06:12.410 Test: blockdev write read size > 128k ...passed 00:06:12.410 Test: blockdev write read invalid size ...passed 00:06:12.410 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.410 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.410 Test: blockdev write read max offset ...passed 00:06:12.410 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.410 Test: blockdev writev readv 8 blocks ...passed 00:06:12.410 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.411 Test: blockdev writev readv block ...passed 00:06:12.411 Test: blockdev writev readv size > 128k ...passed 00:06:12.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.411 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.871192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c9c30000 len:0x1000 00:06:12.411 [2024-12-05 23:43:44.871226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.411 passed 00:06:12.411 Test: blockdev nvme passthru rw ...passed 00:06:12.411 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.411 Test: blockdev nvme admin passthru ...passed 00:06:12.411 Test: blockdev copy ...passed 00:06:12.411 Suite: bdevio tests on: Nvme1n1p1 00:06:12.411 Test: blockdev write read block ...passed 00:06:12.411 Test: blockdev write zeroes read block ...passed 00:06:12.411 Test: blockdev write zeroes read no split ...passed 00:06:12.411 Test: blockdev write zeroes read split ...passed 00:06:12.411 Test: blockdev write zeroes read split partial ...passed 00:06:12.411 Test: blockdev reset ...[2024-12-05 23:43:44.915276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:12.411 [2024-12-05 23:43:44.918143] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:12.411 passed 00:06:12.411 Test: blockdev write read 8 blocks ...passed 00:06:12.411 Test: blockdev write read size > 128k ...passed 00:06:12.411 Test: blockdev write read invalid size ...passed 00:06:12.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.411 Test: blockdev write read max offset ...passed 00:06:12.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.411 Test: blockdev writev readv 8 blocks ...passed 00:06:12.411 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.411 Test: blockdev writev readv block ...passed 00:06:12.411 Test: blockdev writev readv size > 128k ...passed 00:06:12.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.411 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.924739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b600e000 len:0x1000 00:06:12.411 [2024-12-05 23:43:44.924775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.411 passed 00:06:12.411 Test: blockdev nvme passthru rw ...passed 00:06:12.411 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.411 Test: blockdev nvme admin passthru ...passed 00:06:12.411 Test: blockdev copy ...passed 00:06:12.411 Suite: bdevio tests on: Nvme0n1 00:06:12.411 Test: blockdev write read block ...passed 00:06:12.411 Test: blockdev write zeroes read block ...passed 00:06:12.411 Test: blockdev write zeroes read no split ...passed 00:06:12.411 Test: blockdev write zeroes read split ...passed 00:06:12.411 Test: blockdev write zeroes read split partial ...passed 00:06:12.411 Test: blockdev reset ...[2024-12-05 23:43:44.969342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:12.411 [2024-12-05 23:43:44.972761] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:12.411 passed 00:06:12.411 Test: blockdev write read 8 blocks ...passed 00:06:12.411 Test: blockdev write read size > 128k ...passed 00:06:12.411 Test: blockdev write read invalid size ...passed 00:06:12.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.411 Test: blockdev write read max offset ...passed 00:06:12.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.411 Test: blockdev writev readv 8 blocks ...passed 00:06:12.411 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.411 Test: blockdev writev readv block ...passed 00:06:12.411 Test: blockdev writev readv size > 128k ...passed 00:06:12.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.411 Test: blockdev comparev and writev ...[2024-12-05 23:43:44.979020] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:12.411 separate metadata which is not supported yet. 
00:06:12.411 passed 00:06:12.411 Test: blockdev nvme passthru rw ...passed 00:06:12.411 Test: blockdev nvme passthru vendor specific ...[2024-12-05 23:43:44.979488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:12.411 [2024-12-05 23:43:44.979522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:12.411 passed 00:06:12.411 Test: blockdev nvme admin passthru ...passed 00:06:12.411 Test: blockdev copy ...passed 00:06:12.411 00:06:12.411 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.411 suites 7 7 n/a 0 0 00:06:12.411 tests 161 161 161 0 0 00:06:12.411 asserts 1025 1025 1025 0 n/a 00:06:12.411 00:06:12.411 Elapsed time = 1.076 seconds 00:06:12.411 0 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61389 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61389 ']' 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61389 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61389 00:06:12.411 killing process with pid 61389 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61389' 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61389 00:06:12.411 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61389 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:13.343 00:06:13.343 real 0m2.189s 00:06:13.343 user 0m5.523s 00:06:13.343 sys 0m0.329s 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.343 ************************************ 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.343 END TEST bdev_bounds 00:06:13.343 ************************************ 00:06:13.343 23:43:45 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.343 23:43:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:13.343 23:43:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.343 23:43:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:13.343 ************************************ 00:06:13.343 START TEST bdev_nbd 00:06:13.343 ************************************ 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:13.343 23:43:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61443 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61443 /var/tmp/spdk-nbd.sock 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61443 ']' 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.343 23:43:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:13.343 [2024-12-05 23:43:45.883204] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
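The bdev_nbd traces that follow exercise SPDK's NBD export path end to end: bdev_svc is started with the test's bdev JSON config and a dedicated RPC socket, each bdev is attached to a /dev/nbdX node, the node is checked in /proc/partitions and read with a small direct-I/O dd, and the device is detached again. A minimal sketch of that sequence for a single bdev, using only the RPC calls and paths visible in this log (the socket, bdev name, device node and file paths are taken from the trace; running it by hand like this is an illustration, not the exact harness invocation):

# start the bdev service with the test config and a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

# export one bdev as an NBD block device, wait for it to appear, then read one 4 KiB block
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct

# list the active exports, then detach the device
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The full test repeats this for all seven bdevs and then re-verifies every /dev/nbdX against 1 MiB of random data, which is what the nbd_dd_data_verify dd runs further down in the log show.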
00:06:13.343 [2024-12-05 23:43:45.883334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.343 [2024-12-05 23:43:46.042921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.601 [2024-12-05 23:43:46.151443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.166 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.166 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:14.166 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.166 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.166 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:14.167 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:14.426 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.427 1+0 records in 00:06:14.427 1+0 records out 00:06:14.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341612 s, 12.0 MB/s 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:14.427 23:43:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.688 1+0 records in 00:06:14.688 1+0 records out 00:06:14.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374705 s, 10.9 MB/s 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.688 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.689 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:14.689 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.689 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.689 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.689 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.950 1+0 records in 00:06:14.950 1+0 records out 00:06:14.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519341 s, 7.9 MB/s 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.950 1+0 records in 00:06:14.950 1+0 records out 00:06:14.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521501 s, 7.9 MB/s 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:14.950 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.211 1+0 records in 00:06:15.211 1+0 records out 00:06:15.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457095 s, 9.0 MB/s 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:15.211 23:43:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.470 1+0 records in 00:06:15.470 1+0 records out 00:06:15.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356821 s, 11.5 MB/s 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:15.470 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.729 1+0 records in 00:06:15.729 1+0 records out 00:06:15.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305111 s, 13.4 MB/s 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:15.729 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd0", 00:06:15.990 "bdev_name": "Nvme0n1" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd1", 00:06:15.990 "bdev_name": "Nvme1n1p1" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd2", 00:06:15.990 "bdev_name": "Nvme1n1p2" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd3", 00:06:15.990 "bdev_name": "Nvme2n1" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd4", 00:06:15.990 "bdev_name": "Nvme2n2" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd5", 00:06:15.990 "bdev_name": "Nvme2n3" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd6", 00:06:15.990 "bdev_name": "Nvme3n1" 00:06:15.990 } 00:06:15.990 ]' 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd0", 00:06:15.990 "bdev_name": "Nvme0n1" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd1", 00:06:15.990 "bdev_name": "Nvme1n1p1" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd2", 00:06:15.990 "bdev_name": "Nvme1n1p2" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd3", 00:06:15.990 "bdev_name": "Nvme2n1" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd4", 00:06:15.990 "bdev_name": "Nvme2n2" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd5", 00:06:15.990 "bdev_name": "Nvme2n3" 00:06:15.990 }, 00:06:15.990 { 00:06:15.990 "nbd_device": "/dev/nbd6", 00:06:15.990 "bdev_name": "Nvme3n1" 00:06:15.990 } 00:06:15.990 ]' 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.990 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.252 23:43:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.511 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.511 23:43:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.769 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.027 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.286 23:43:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.544 
23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:17.544 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:17.802 /dev/nbd0 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.802 1+0 records in 00:06:17.802 1+0 records out 00:06:17.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334088 s, 12.3 MB/s 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:17.802 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:18.060 /dev/nbd1 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.060 23:43:50 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.060 1+0 records in 00:06:18.060 1+0 records out 00:06:18.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356357 s, 11.5 MB/s 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:18.060 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:18.318 /dev/nbd10 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.318 1+0 records in 00:06:18.318 1+0 records out 00:06:18.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366399 s, 11.2 MB/s 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:18.318 23:43:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:18.576 /dev/nbd11 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.576 1+0 records in 00:06:18.576 1+0 records out 00:06:18.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328455 s, 12.5 MB/s 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.576 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.577 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.577 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.577 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:18.577 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:18.577 /dev/nbd12 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.834 1+0 records in 00:06:18.834 1+0 records out 00:06:18.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477442 s, 8.6 MB/s 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:18.834 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:18.834 /dev/nbd13 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.835 1+0 records in 00:06:18.835 1+0 records out 00:06:18.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555807 s, 7.4 MB/s 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.835 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.093 23:43:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:19.093 /dev/nbd14 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.093 1+0 records in 00:06:19.093 1+0 records out 00:06:19.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488428 s, 8.4 MB/s 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.093 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.351 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd0", 00:06:19.351 "bdev_name": "Nvme0n1" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd1", 00:06:19.351 "bdev_name": "Nvme1n1p1" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd10", 00:06:19.351 "bdev_name": "Nvme1n1p2" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd11", 00:06:19.351 "bdev_name": "Nvme2n1" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd12", 00:06:19.351 "bdev_name": "Nvme2n2" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd13", 
00:06:19.351 "bdev_name": "Nvme2n3" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd14", 00:06:19.351 "bdev_name": "Nvme3n1" 00:06:19.351 } 00:06:19.351 ]' 00:06:19.351 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd0", 00:06:19.351 "bdev_name": "Nvme0n1" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd1", 00:06:19.351 "bdev_name": "Nvme1n1p1" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd10", 00:06:19.351 "bdev_name": "Nvme1n1p2" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd11", 00:06:19.351 "bdev_name": "Nvme2n1" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd12", 00:06:19.351 "bdev_name": "Nvme2n2" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd13", 00:06:19.351 "bdev_name": "Nvme2n3" 00:06:19.351 }, 00:06:19.351 { 00:06:19.351 "nbd_device": "/dev/nbd14", 00:06:19.351 "bdev_name": "Nvme3n1" 00:06:19.351 } 00:06:19.351 ]' 00:06:19.351 23:43:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.351 /dev/nbd1 00:06:19.351 /dev/nbd10 00:06:19.351 /dev/nbd11 00:06:19.351 /dev/nbd12 00:06:19.351 /dev/nbd13 00:06:19.351 /dev/nbd14' 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.351 /dev/nbd1 00:06:19.351 /dev/nbd10 00:06:19.351 /dev/nbd11 00:06:19.351 /dev/nbd12 00:06:19.351 /dev/nbd13 00:06:19.351 /dev/nbd14' 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:19.351 256+0 records in 00:06:19.351 256+0 records out 00:06:19.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739956 s, 142 MB/s 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.351 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.609 256+0 records in 00:06:19.609 256+0 records out 00:06:19.609 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0832137 s, 12.6 MB/s 00:06:19.609 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.609 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.609 256+0 records in 00:06:19.609 256+0 records out 00:06:19.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077904 s, 13.5 MB/s 00:06:19.609 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.609 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:19.609 256+0 records in 00:06:19.609 256+0 records out 00:06:19.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0743077 s, 14.1 MB/s 00:06:19.609 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.609 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:19.868 256+0 records in 00:06:19.868 256+0 records out 00:06:19.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0809989 s, 12.9 MB/s 00:06:19.868 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.868 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:19.868 256+0 records in 00:06:19.868 256+0 records out 00:06:19.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0834131 s, 12.6 MB/s 00:06:19.868 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.868 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:19.868 256+0 records in 00:06:19.868 256+0 records out 00:06:19.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0733526 s, 14.3 MB/s 00:06:19.868 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.868 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:20.127 256+0 records in 00:06:20.127 256+0 records out 00:06:20.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0744953 s, 14.1 MB/s 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.127 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.385 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.385 23:43:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.386 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.386 23:43:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.386 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.386 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.386 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.386 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.386 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.386 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.644 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.902 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.160 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.417 23:43:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.674 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.931 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.931 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.932 23:43:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:21.932 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:22.194 malloc_lvol_verify 00:06:22.194 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:22.194 f9b7265c-9713-46b0-bb31-86ef39628db2 00:06:22.194 23:43:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:22.454 96f5d29b-cbaa-49e8-890a-d69dd90fa432 00:06:22.454 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:22.712 /dev/nbd0 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:22.712 mke2fs 1.47.0 (5-Feb-2023) 00:06:22.712 Discarding device blocks: 0/4096 done 00:06:22.712 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:22.712 00:06:22.712 Allocating group tables: 0/1 done 00:06:22.712 Writing inode tables: 0/1 done 00:06:22.712 Creating journal (1024 blocks): done 00:06:22.712 Writing superblocks and filesystem accounting information: 0/1 done 00:06:22.712 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.712 23:43:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.712 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61443 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61443 ']' 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61443 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61443 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.970 killing process with pid 61443 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61443' 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61443 00:06:22.970 23:43:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61443 00:06:23.572 ************************************ 00:06:23.573 END TEST bdev_nbd 00:06:23.573 ************************************ 00:06:23.573 23:43:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.573 00:06:23.573 real 0m10.441s 00:06:23.573 user 0m14.880s 00:06:23.573 sys 0m3.444s 00:06:23.573 23:43:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.573 23:43:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.829 23:43:56 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:23.829 23:43:56 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:23.829 23:43:56 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:23.829 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.829 23:43:56 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
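For readers following the trace, the bdev_nbd stage that finished above reduces to the shell pattern below. This is a condensed, illustrative sketch (the wait_for_nbd name, the /tmp paths, and the retry sleep are invented for brevity; the device names, block sizes, and cmp options are taken from the trace), not the literal nbd_common.sh source:

    # Wait until the kernel exposes the nbd device, then prove it is readable
    # with a single 4 KiB direct-I/O read (paraphrasing the waitfornbd helper).
    wait_for_nbd() {
        local nbd=$1
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/"$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    }

    # Write the same random 1 MiB to every attached nbd device, then read it
    # back with cmp to confirm the data round-trips through the SPDK bdevs.
    nbd_list="/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14"
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in $nbd_list; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in $nbd_list; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest

The direct-I/O flags (iflag=direct / oflag=direct) matter here: they bypass the page cache so every read and write actually traverses the nbd connection into the SPDK target.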
00:06:23.829 23:43:56 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.829 23:43:56 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.829 23:43:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:23.829 23:43:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.829 23:43:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:23.829 ************************************ 00:06:23.829 START TEST bdev_verify 00:06:23.829 ************************************ 00:06:23.829 23:43:56 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.829 [2024-12-05 23:43:56.356912] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:23.829 [2024-12-05 23:43:56.357050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61854 ] 00:06:23.829 [2024-12-05 23:43:56.517244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.086 [2024-12-05 23:43:56.609242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.086 [2024-12-05 23:43:56.609451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.650 Running I/O for 5 seconds... 
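The bdev_verify stage that has just started is a thin wrapper around the bdevperf example application. Stripped of the run_test bookkeeping, the invocation recorded above is equivalent to the sketch below; the paths and option values are copied from the trace, the comments annotate only the options with standard bdevperf meanings, and -C is kept exactly as the harness passes it:

    # Option values as recorded in the trace:
    #   -q 128     queue depth
    #   -o 4096    I/O size in bytes (4 KiB)
    #   -w verify  verification workload (data is read back and checked)
    #   -t 5       run time in seconds
    #   -m 0x3     core mask: reactors on cores 0 and 1
    #   -C         passed through unchanged from the harness
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The bdev_verify_big_io stage that follows reuses the same command with -o 65536, so its result table differs only in I/O size.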
00:06:26.955 23936.00 IOPS, 93.50 MiB/s [2024-12-05T23:44:00.597Z] 22720.00 IOPS, 88.75 MiB/s [2024-12-05T23:44:01.531Z] 23232.00 IOPS, 90.75 MiB/s [2024-12-05T23:44:02.532Z] 22848.00 IOPS, 89.25 MiB/s [2024-12-05T23:44:02.532Z] 23257.60 IOPS, 90.85 MiB/s 00:06:29.823 Latency(us) 00:06:29.823 [2024-12-05T23:44:02.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:29.823 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0xbd0bd 00:06:29.823 Nvme0n1 : 5.06 1668.42 6.52 0.00 0.00 76557.97 13611.32 78239.90 00:06:29.823 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:29.823 Nvme0n1 : 5.06 1631.63 6.37 0.00 0.00 78112.82 8973.39 70577.23 00:06:29.823 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0x4ff80 00:06:29.823 Nvme1n1p1 : 5.07 1667.87 6.52 0.00 0.00 76471.89 15022.87 75820.11 00:06:29.823 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:29.823 Nvme1n1p1 : 5.08 1639.07 6.40 0.00 0.00 77763.67 13107.20 67754.14 00:06:29.823 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0x4ff7f 00:06:29.823 Nvme1n1p2 : 5.07 1667.35 6.51 0.00 0.00 76390.24 16232.76 74206.92 00:06:29.823 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:29.823 Nvme1n1p2 : 5.08 1638.59 6.40 0.00 0.00 77633.56 11494.01 67754.14 00:06:29.823 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0x80000 00:06:29.823 Nvme2n1 : 5.07 1666.30 6.51 0.00 0.00 76312.10 17946.78 70577.23 00:06:29.823 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x80000 length 0x80000 00:06:29.823 Nvme2n1 : 5.08 1638.13 6.40 0.00 0.00 77496.77 11998.13 68157.44 00:06:29.823 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0x80000 00:06:29.823 Nvme2n2 : 5.07 1665.81 6.51 0.00 0.00 76227.60 17341.83 71787.13 00:06:29.823 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x80000 length 0x80000 00:06:29.823 Nvme2n2 : 5.08 1637.17 6.40 0.00 0.00 77375.57 13913.80 68157.44 00:06:29.823 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0x80000 00:06:29.823 Nvme2n3 : 5.07 1665.35 6.51 0.00 0.00 76132.10 14922.04 76223.41 00:06:29.823 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x80000 length 0x80000 00:06:29.823 Nvme2n3 : 5.08 1636.73 6.39 0.00 0.00 77282.29 12703.90 69770.63 00:06:29.823 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x0 length 0x20000 00:06:29.823 Nvme3n1 : 5.07 1664.83 6.50 0.00 0.00 76033.50 8318.03 79046.50 00:06:29.823 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.823 Verification LBA range: start 0x20000 length 0x20000 00:06:29.823 Nvme3n1 
: 5.08 1636.30 6.39 0.00 0.00 77226.05 9326.28 70173.93 00:06:29.823 [2024-12-05T23:44:02.532Z] =================================================================================================================== 00:06:29.823 [2024-12-05T23:44:02.532Z] Total : 23123.54 90.33 0.00 0.00 76924.30 8318.03 79046.50 00:06:31.722 00:06:31.722 real 0m7.786s 00:06:31.722 user 0m14.633s 00:06:31.722 sys 0m0.246s 00:06:31.722 23:44:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.722 23:44:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 ************************************ 00:06:31.722 END TEST bdev_verify 00:06:31.722 ************************************ 00:06:31.722 23:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.722 23:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:31.722 23:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.722 23:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:31.722 ************************************ 00:06:31.722 START TEST bdev_verify_big_io 00:06:31.722 ************************************ 00:06:31.722 23:44:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.722 [2024-12-05 23:44:04.183078] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:31.722 [2024-12-05 23:44:04.183206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:06:31.722 [2024-12-05 23:44:04.343462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.981 [2024-12-05 23:44:04.448234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.981 [2024-12-05 23:44:04.448458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.546 Running I/O for 5 seconds... 
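As a quick consistency check on the bdev_verify summary above, the MiB/s column is just IOPS multiplied by the 4 KiB I/O size. For the Total row:

    23123.54 IOPS x 4096 B ≈ 94.7 MB/s ≈ 90.33 MiB/s

which matches the reported 90.33 MiB/s. The same identity (IOPS x I/O size) holds for the 64 KiB bdev_verify_big_io table that follows.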
00:06:38.102 1247.00 IOPS, 77.94 MiB/s [2024-12-05T23:44:11.747Z] 2547.50 IOPS, 159.22 MiB/s [2024-12-05T23:44:11.747Z] 3085.67 IOPS, 192.85 MiB/s 00:06:39.038 Latency(us) 00:06:39.038 [2024-12-05T23:44:11.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0xbd0b 00:06:39.038 Nvme0n1 : 5.86 95.60 5.98 0.00 0.00 1272783.16 11544.42 1477685.56 00:06:39.038 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:39.038 Nvme0n1 : 5.93 101.99 6.37 0.00 0.00 1200256.97 15728.64 1393799.48 00:06:39.038 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0x4ff8 00:06:39.038 Nvme1n1p1 : 5.99 97.34 6.08 0.00 0.00 1211332.87 88322.36 1393799.48 00:06:39.038 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:39.038 Nvme1n1p1 : 5.93 104.16 6.51 0.00 0.00 1132262.71 83886.08 1200216.22 00:06:39.038 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0x4ff7 00:06:39.038 Nvme1n1p2 : 5.99 93.46 5.84 0.00 0.00 1221463.57 129055.51 1858399.31 00:06:39.038 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:39.038 Nvme1n1p2 : 6.00 107.28 6.71 0.00 0.00 1066778.29 104857.60 1038896.84 00:06:39.038 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0x8000 00:06:39.038 Nvme2n1 : 6.11 94.90 5.93 0.00 0.00 1149581.40 117763.15 1884210.41 00:06:39.038 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x8000 length 0x8000 00:06:39.038 Nvme2n1 : 6.08 110.73 6.92 0.00 0.00 998482.20 67754.14 1206669.00 00:06:39.038 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0x8000 00:06:39.038 Nvme2n2 : 6.25 105.28 6.58 0.00 0.00 1010574.93 36095.21 2206849.18 00:06:39.038 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x8000 length 0x8000 00:06:39.038 Nvme2n2 : 6.08 115.80 7.24 0.00 0.00 932456.69 68560.74 1219574.55 00:06:39.038 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0x8000 00:06:39.038 Nvme2n3 : 6.25 109.73 6.86 0.00 0.00 935338.27 16535.24 2245565.83 00:06:39.038 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x8000 length 0x8000 00:06:39.038 Nvme2n3 : 6.15 124.90 7.81 0.00 0.00 837694.49 36700.16 1232480.10 00:06:39.038 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x0 length 0x2000 00:06:39.038 Nvme3n1 : 6.35 153.66 9.60 0.00 0.00 648446.51 289.87 2284282.49 00:06:39.038 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.038 Verification LBA range: start 0x2000 length 0x2000 00:06:39.038 Nvme3n1 : 6.23 144.24 9.01 0.00 0.00 702164.61 857.01 1251838.42 00:06:39.038 
[2024-12-05T23:44:11.747Z] =================================================================================================================== 00:06:39.038 [2024-12-05T23:44:11.747Z] Total : 1559.08 97.44 0.00 0.00 990697.00 289.87 2284282.49 00:06:42.319 00:06:42.319 real 0m10.663s 00:06:42.319 user 0m20.308s 00:06:42.319 sys 0m0.301s 00:06:42.319 23:44:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.319 23:44:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:42.319 ************************************ 00:06:42.319 END TEST bdev_verify_big_io 00:06:42.319 ************************************ 00:06:42.319 23:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.319 23:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:42.319 23:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.319 23:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:42.319 ************************************ 00:06:42.319 START TEST bdev_write_zeroes 00:06:42.319 ************************************ 00:06:42.319 23:44:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.319 [2024-12-05 23:44:14.882209] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:42.319 [2024-12-05 23:44:14.882343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:06:42.583 [2024-12-05 23:44:15.043472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.583 [2024-12-05 23:44:15.143931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.191 Running I/O for 1 seconds... 
00:06:44.130 59549.00 IOPS, 232.61 MiB/s 00:06:44.130 Latency(us) 00:06:44.130 [2024-12-05T23:44:16.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.130 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme0n1 : 1.03 8436.29 32.95 0.00 0.00 15137.97 6175.51 32263.88 00:06:44.130 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme1n1p1 : 1.03 8460.05 33.05 0.00 0.00 15075.33 11342.77 28432.54 00:06:44.130 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme1n1p2 : 1.03 8449.62 33.01 0.00 0.00 14965.08 8872.57 28432.54 00:06:44.130 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme2n1 : 1.03 8440.15 32.97 0.00 0.00 14919.09 6956.90 27424.30 00:06:44.130 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme2n2 : 1.03 8430.72 32.93 0.00 0.00 14914.28 6704.84 27827.59 00:06:44.130 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme2n3 : 1.03 8421.32 32.90 0.00 0.00 14912.07 6351.95 26617.70 00:06:44.130 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:44.130 Nvme3n1 : 1.03 8411.78 32.86 0.00 0.00 14908.50 6402.36 27424.30 00:06:44.130 [2024-12-05T23:44:16.839Z] =================================================================================================================== 00:06:44.130 [2024-12-05T23:44:16.839Z] Total : 59049.92 230.66 0.00 0.00 14975.95 6175.51 32263.88 00:06:45.064 00:06:45.064 real 0m2.720s 00:06:45.064 user 0m2.409s 00:06:45.064 sys 0m0.195s 00:06:45.064 23:44:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.064 ************************************ 00:06:45.064 END TEST bdev_write_zeroes 00:06:45.064 ************************************ 00:06:45.064 23:44:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:45.064 23:44:17 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.064 23:44:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:45.064 23:44:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.064 23:44:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.064 ************************************ 00:06:45.064 START TEST bdev_json_nonenclosed 00:06:45.064 ************************************ 00:06:45.064 23:44:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.064 [2024-12-05 23:44:17.658924] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:45.064 [2024-12-05 23:44:17.659066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62125 ] 00:06:45.322 [2024-12-05 23:44:17.815662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.322 [2024-12-05 23:44:17.932485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.322 [2024-12-05 23:44:17.932594] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:45.322 [2024-12-05 23:44:17.932614] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.322 [2024-12-05 23:44:17.932624] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.580 00:06:45.580 real 0m0.546s 00:06:45.580 user 0m0.349s 00:06:45.580 sys 0m0.090s 00:06:45.580 23:44:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.580 23:44:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:45.580 ************************************ 00:06:45.580 END TEST bdev_json_nonenclosed 00:06:45.580 ************************************ 00:06:45.580 23:44:18 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.580 23:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:45.580 23:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.580 23:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.580 ************************************ 00:06:45.580 START TEST bdev_json_nonarray 00:06:45.580 ************************************ 00:06:45.580 23:44:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.581 [2024-12-05 23:44:18.275635] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:45.581 [2024-12-05 23:44:18.275796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62151 ] 00:06:45.840 [2024-12-05 23:44:18.444268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.099 [2024-12-05 23:44:18.588580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.100 [2024-12-05 23:44:18.588715] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:46.100 [2024-12-05 23:44:18.588736] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:46.100 [2024-12-05 23:44:18.588747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.100 00:06:46.100 real 0m0.593s 00:06:46.100 user 0m0.370s 00:06:46.100 sys 0m0.116s 00:06:46.100 23:44:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.100 23:44:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:46.100 ************************************ 00:06:46.100 END TEST bdev_json_nonarray 00:06:46.100 ************************************ 00:06:46.359 23:44:18 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:06:46.359 23:44:18 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:06:46.359 23:44:18 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:46.359 23:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.359 23:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.359 23:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.359 ************************************ 00:06:46.359 START TEST bdev_gpt_uuid 00:06:46.359 ************************************ 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62176 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62176 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62176 ']' 00:06:46.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.359 23:44:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:46.359 [2024-12-05 23:44:18.956812] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:06:46.359 [2024-12-05 23:44:18.957004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62176 ] 00:06:46.618 [2024-12-05 23:44:19.126871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.618 [2024-12-05 23:44:19.271518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.556 23:44:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.556 23:44:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:06:47.556 23:44:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:47.556 23:44:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.556 23:44:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:47.815 Some configs were skipped because the RPC state that can call them passed over. 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:47.815 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:06:47.816 { 00:06:47.816 "name": "Nvme1n1p1", 00:06:47.816 "aliases": [ 00:06:47.816 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:47.816 ], 00:06:47.816 "product_name": "GPT Disk", 00:06:47.816 "block_size": 4096, 00:06:47.816 "num_blocks": 655104, 00:06:47.816 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:47.816 "assigned_rate_limits": { 00:06:47.816 "rw_ios_per_sec": 0, 00:06:47.816 "rw_mbytes_per_sec": 0, 00:06:47.816 "r_mbytes_per_sec": 0, 00:06:47.816 "w_mbytes_per_sec": 0 00:06:47.816 }, 00:06:47.816 "claimed": false, 00:06:47.816 "zoned": false, 00:06:47.816 "supported_io_types": { 00:06:47.816 "read": true, 00:06:47.816 "write": true, 00:06:47.816 "unmap": true, 00:06:47.816 "flush": true, 00:06:47.816 "reset": true, 00:06:47.816 "nvme_admin": false, 00:06:47.816 "nvme_io": false, 00:06:47.816 "nvme_io_md": false, 00:06:47.816 "write_zeroes": true, 00:06:47.816 "zcopy": false, 00:06:47.816 "get_zone_info": false, 00:06:47.816 "zone_management": false, 00:06:47.816 "zone_append": false, 00:06:47.816 "compare": true, 00:06:47.816 "compare_and_write": false, 00:06:47.816 "abort": true, 00:06:47.816 "seek_hole": false, 00:06:47.816 "seek_data": false, 00:06:47.816 "copy": true, 00:06:47.816 "nvme_iov_md": false 00:06:47.816 }, 00:06:47.816 "driver_specific": { 
00:06:47.816 "gpt": { 00:06:47.816 "base_bdev": "Nvme1n1", 00:06:47.816 "offset_blocks": 256, 00:06:47.816 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:47.816 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:47.816 "partition_name": "SPDK_TEST_first" 00:06:47.816 } 00:06:47.816 } 00:06:47.816 } 00:06:47.816 ]' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:06:47.816 { 00:06:47.816 "name": "Nvme1n1p2", 00:06:47.816 "aliases": [ 00:06:47.816 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:47.816 ], 00:06:47.816 "product_name": "GPT Disk", 00:06:47.816 "block_size": 4096, 00:06:47.816 "num_blocks": 655103, 00:06:47.816 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:47.816 "assigned_rate_limits": { 00:06:47.816 "rw_ios_per_sec": 0, 00:06:47.816 "rw_mbytes_per_sec": 0, 00:06:47.816 "r_mbytes_per_sec": 0, 00:06:47.816 "w_mbytes_per_sec": 0 00:06:47.816 }, 00:06:47.816 "claimed": false, 00:06:47.816 "zoned": false, 00:06:47.816 "supported_io_types": { 00:06:47.816 "read": true, 00:06:47.816 "write": true, 00:06:47.816 "unmap": true, 00:06:47.816 "flush": true, 00:06:47.816 "reset": true, 00:06:47.816 "nvme_admin": false, 00:06:47.816 "nvme_io": false, 00:06:47.816 "nvme_io_md": false, 00:06:47.816 "write_zeroes": true, 00:06:47.816 "zcopy": false, 00:06:47.816 "get_zone_info": false, 00:06:47.816 "zone_management": false, 00:06:47.816 "zone_append": false, 00:06:47.816 "compare": true, 00:06:47.816 "compare_and_write": false, 00:06:47.816 "abort": true, 00:06:47.816 "seek_hole": false, 00:06:47.816 "seek_data": false, 00:06:47.816 "copy": true, 00:06:47.816 "nvme_iov_md": false 00:06:47.816 }, 00:06:47.816 "driver_specific": { 00:06:47.816 "gpt": { 00:06:47.816 "base_bdev": "Nvme1n1", 00:06:47.816 "offset_blocks": 655360, 00:06:47.816 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:47.816 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:47.816 "partition_name": "SPDK_TEST_second" 00:06:47.816 } 00:06:47.816 } 00:06:47.816 } 00:06:47.816 ]' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62176 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62176 ']' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62176 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.816 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62176 00:06:48.078 killing process with pid 62176 00:06:48.078 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.078 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.078 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62176' 00:06:48.078 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62176 00:06:48.078 23:44:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62176 00:06:50.016 ************************************ 00:06:50.016 END TEST bdev_gpt_uuid 00:06:50.016 ************************************ 00:06:50.016 00:06:50.016 real 0m3.356s 00:06:50.016 user 0m3.336s 00:06:50.016 sys 0m0.484s 00:06:50.016 23:44:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.016 23:44:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:50.016 23:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:50.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.016 Waiting for block devices as requested 00:06:50.277 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:50.277 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:06:50.277 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:50.539 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:55.834 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:55.834 23:44:28 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:06:55.834 23:44:28 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:06:55.834 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:55.834 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:55.834 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:55.834 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:55.834 23:44:28 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:06:55.834 00:06:55.834 real 0m58.290s 00:06:55.834 user 1m15.803s 00:06:55.834 sys 0m8.022s 00:06:55.834 23:44:28 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.834 ************************************ 00:06:55.834 END TEST blockdev_nvme_gpt 00:06:55.834 ************************************ 00:06:55.834 23:44:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 23:44:28 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:55.834 23:44:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.834 23:44:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.834 23:44:28 -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 ************************************ 00:06:55.834 START TEST nvme 00:06:55.834 ************************************ 00:06:55.834 23:44:28 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:56.095 * Looking for test storage... 00:06:56.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.096 23:44:28 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.096 23:44:28 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.096 23:44:28 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.096 23:44:28 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.096 23:44:28 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.096 23:44:28 nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:56.096 23:44:28 nvme -- scripts/common.sh@345 -- # : 1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.096 23:44:28 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.096 23:44:28 nvme -- scripts/common.sh@365 -- # decimal 1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@353 -- # local d=1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.096 23:44:28 nvme -- scripts/common.sh@355 -- # echo 1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.096 23:44:28 nvme -- scripts/common.sh@366 -- # decimal 2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@353 -- # local d=2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.096 23:44:28 nvme -- scripts/common.sh@355 -- # echo 2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.096 23:44:28 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.096 23:44:28 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.096 23:44:28 nvme -- scripts/common.sh@368 -- # return 0 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.096 --rc genhtml_branch_coverage=1 00:06:56.096 --rc genhtml_function_coverage=1 00:06:56.096 --rc genhtml_legend=1 00:06:56.096 --rc geninfo_all_blocks=1 00:06:56.096 --rc geninfo_unexecuted_blocks=1 00:06:56.096 00:06:56.096 ' 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.096 --rc genhtml_branch_coverage=1 00:06:56.096 --rc genhtml_function_coverage=1 00:06:56.096 --rc genhtml_legend=1 00:06:56.096 --rc geninfo_all_blocks=1 00:06:56.096 --rc geninfo_unexecuted_blocks=1 00:06:56.096 00:06:56.096 ' 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.096 --rc genhtml_branch_coverage=1 00:06:56.096 --rc genhtml_function_coverage=1 00:06:56.096 --rc genhtml_legend=1 00:06:56.096 --rc geninfo_all_blocks=1 00:06:56.096 --rc geninfo_unexecuted_blocks=1 00:06:56.096 00:06:56.096 ' 00:06:56.096 23:44:28 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.096 --rc genhtml_branch_coverage=1 00:06:56.096 --rc genhtml_function_coverage=1 00:06:56.096 --rc genhtml_legend=1 00:06:56.096 --rc geninfo_all_blocks=1 00:06:56.096 --rc geninfo_unexecuted_blocks=1 00:06:56.096 00:06:56.096 ' 00:06:56.096 23:44:28 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:56.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.242 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.242 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.242 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.243 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.243 23:44:29 nvme -- nvme/nvme.sh@79 -- # uname 00:06:57.243 23:44:29 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:06:57.243 23:44:29 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:06:57.243 23:44:29 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:06:57.243 23:44:29 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:06:57.243 Waiting for stub to ready for secondary processes... 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1075 -- # stubpid=62822 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62822 ]] 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:06:57.243 23:44:29 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:06:57.243 [2024-12-05 23:44:29.904092] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:06:57.243 [2024-12-05 23:44:29.904729] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:06:58.186 23:44:30 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:58.186 23:44:30 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62822 ]] 00:06:58.186 23:44:30 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:06:58.448 [2024-12-05 23:44:31.093803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.713 [2024-12-05 23:44:31.212460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.713 [2024-12-05 23:44:31.212885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.713 [2024-12-05 23:44:31.213012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.713 [2024-12-05 23:44:31.231092] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:06:58.713 [2024-12-05 23:44:31.231151] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:58.713 [2024-12-05 23:44:31.247270] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:06:58.713 [2024-12-05 23:44:31.247557] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:06:58.713 [2024-12-05 23:44:31.252541] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:58.713 [2024-12-05 23:44:31.253059] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:06:58.713 [2024-12-05 23:44:31.253257] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:06:58.713 [2024-12-05 23:44:31.256950] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:58.713 [2024-12-05 23:44:31.257183] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:06:58.713 [2024-12-05 23:44:31.257260] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:06:58.713 [2024-12-05 23:44:31.259218] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:06:58.713 [2024-12-05 23:44:31.259405] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:06:58.713 [2024-12-05 23:44:31.259463] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:06:58.713 [2024-12-05 23:44:31.259499] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:06:58.713 [2024-12-05 23:44:31.259529] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:06:59.279 23:44:31 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:59.279 done. 00:06:59.279 23:44:31 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:06:59.279 23:44:31 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:59.279 23:44:31 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:06:59.279 23:44:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.279 23:44:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.279 ************************************ 00:06:59.279 START TEST nvme_reset 00:06:59.279 ************************************ 00:06:59.279 23:44:31 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:06:59.537 Initializing NVMe Controllers 00:06:59.537 Skipping QEMU NVMe SSD at 0000:00:13.0 00:06:59.537 Skipping QEMU NVMe SSD at 0000:00:10.0 00:06:59.537 Skipping QEMU NVMe SSD at 0000:00:11.0 00:06:59.537 Skipping QEMU NVMe SSD at 0000:00:12.0 00:06:59.537 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:06:59.537 00:06:59.537 real 0m0.227s 00:06:59.537 user 0m0.073s 00:06:59.537 sys 0m0.108s 00:06:59.537 23:44:32 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.537 23:44:32 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:06:59.537 ************************************ 00:06:59.537 END TEST nvme_reset 00:06:59.537 ************************************ 00:06:59.537 23:44:32 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:06:59.537 23:44:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.537 23:44:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.537 23:44:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.537 ************************************ 00:06:59.537 START TEST nvme_identify 00:06:59.537 ************************************ 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:06:59.537 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:06:59.537 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:06:59.537 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:06:59.537 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:59.537 23:44:32 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:59.537 23:44:32 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:59.798 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:06:59.798 ===================================================== 00:06:59.798 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:06:59.798 ===================================================== 00:06:59.798 Controller Capabilities/Features 00:06:59.798 ================================ 00:06:59.798 Vendor ID: 1b36 00:06:59.798 Subsystem Vendor ID: 1af4 00:06:59.798 Serial Number: 12343 00:06:59.798 Model Number: QEMU NVMe Ctrl 00:06:59.798 Firmware Version: 8.0.0 00:06:59.798 Recommended Arb Burst: 6 00:06:59.798 IEEE OUI Identifier: 00 54 52 00:06:59.798 Multi-path I/O 00:06:59.798 May have multiple subsystem ports: No 00:06:59.798 May have multiple controllers: Yes 00:06:59.798 Associated with SR-IOV VF: No 00:06:59.798 Max Data Transfer Size: 524288 00:06:59.798 Max Number of Namespaces: 256 00:06:59.798 Max Number of I/O Queues: 64 00:06:59.798 NVMe Specification Version (VS): 1.4 00:06:59.798 NVMe Specification Version (Identify): 1.4 00:06:59.798 Maximum Queue Entries: 2048 00:06:59.798 Contiguous Queues Required: Yes 00:06:59.798 Arbitration Mechanisms Supported 00:06:59.798 Weighted Round Robin: Not Supported 00:06:59.798 Vendor Specific: Not Supported 00:06:59.798 Reset Timeout: 7500 ms 00:06:59.798 Doorbell Stride: 4 bytes 00:06:59.798 NVM Subsystem Reset: Not Supported 00:06:59.798 Command Sets Supported 00:06:59.798 NVM Command Set: Supported 00:06:59.798 Boot Partition: Not Supported 00:06:59.798 Memory Page Size Minimum: 4096 bytes 00:06:59.798 Memory Page Size Maximum: 65536 bytes 00:06:59.798 Persistent Memory Region: Not Supported 00:06:59.798 Optional Asynchronous Events Supported 00:06:59.798 Namespace Attribute Notices: Supported 00:06:59.798 Firmware Activation Notices: Not Supported 00:06:59.798 ANA Change Notices: Not Supported 00:06:59.798 PLE Aggregate Log Change Notices: Not Supported 00:06:59.798 LBA Status Info Alert Notices: Not Supported 00:06:59.798 EGE Aggregate Log Change Notices: Not Supported 00:06:59.798 Normal NVM Subsystem Shutdown event: Not Supported 00:06:59.798 Zone Descriptor Change Notices: Not Supported 00:06:59.798 Discovery Log Change Notices: Not Supported 00:06:59.798 Controller Attributes 00:06:59.798 128-bit Host Identifier: Not Supported 00:06:59.798 Non-Operational Permissive Mode: Not Supported 00:06:59.798 NVM Sets: Not Supported 00:06:59.798 Read Recovery Levels: Not Supported 00:06:59.798 Endurance Groups: Supported 00:06:59.798 Predictable Latency Mode: Not Supported 00:06:59.798 Traffic Based Keep ALive: Not Supported 00:06:59.798 Namespace Granularity: Not Supported 00:06:59.798 SQ Associations: Not Supported 00:06:59.798 UUID List: Not Supported 00:06:59.798 Multi-Domain Subsystem: Not Supported 00:06:59.798 Fixed Capacity Management: Not Supported 00:06:59.798 Variable Capacity Management: Not Supported 00:06:59.798 Delete Endurance Group: Not Supported 00:06:59.798 Delete NVM Set: Not Supported 00:06:59.798 Extended LBA Formats Supported: Supported 00:06:59.798 Flexible Data Placement Supported: Supported 00:06:59.798 00:06:59.798 Controller Memory Buffer Support 00:06:59.798 ================================ 00:06:59.798 Supported: No 00:06:59.798 00:06:59.798 Persistent Memory Region Support 00:06:59.798 ================================ 00:06:59.798 Supported: No 00:06:59.798 00:06:59.798 Admin Command 
Set Attributes 00:06:59.798 ============================ 00:06:59.798 Security Send/Receive: Not Supported 00:06:59.798 Format NVM: Supported 00:06:59.798 Firmware Activate/Download: Not Supported 00:06:59.798 Namespace Management: Supported 00:06:59.798 Device Self-Test: Not Supported 00:06:59.798 Directives: Supported 00:06:59.798 NVMe-MI: Not Supported 00:06:59.798 Virtualization Management: Not Supported 00:06:59.798 Doorbell Buffer Config: Supported 00:06:59.798 Get LBA Status Capability: Not Supported 00:06:59.798 Command & Feature Lockdown Capability: Not Supported 00:06:59.798 Abort Command Limit: 4 00:06:59.798 Async Event Request Limit: 4 00:06:59.798 Number of Firmware Slots: N/A 00:06:59.798 Firmware Slot 1 Read-Only: N/A 00:06:59.798 Firmware Activation Without Reset: N/A 00:06:59.798 Multiple Update Detection Support: N/A 00:06:59.798 Firmware Update Granularity: No Information Provided 00:06:59.798 Per-Namespace SMART Log: Yes 00:06:59.798 Asymmetric Namespace Access Log Page: Not Supported 00:06:59.798 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:06:59.798 Command Effects Log Page: Supported 00:06:59.798 Get Log Page Extended Data: Supported 00:06:59.798 Telemetry Log Pages: Not Supported 00:06:59.798 Persistent Event Log Pages: Not Supported 00:06:59.798 Supported Log Pages Log Page: May Support 00:06:59.798 Commands Supported & Effects Log Page: Not Supported 00:06:59.798 Feature Identifiers & Effects Log Page:May Support 00:06:59.798 NVMe-MI Commands & Effects Log Page: May Support 00:06:59.798 Data Area 4 for Telemetry Log: Not Supported 00:06:59.798 Error Log Page Entries Supported: 1 00:06:59.798 Keep Alive: Not Supported 00:06:59.798 00:06:59.798 NVM Command Set Attributes 00:06:59.798 ========================== 00:06:59.798 Submission Queue Entry Size 00:06:59.798 Max: 64 00:06:59.798 Min: 64 00:06:59.798 Completion Queue Entry Size 00:06:59.798 Max: 16 00:06:59.798 Min: 16 00:06:59.798 Number of Namespaces: 256 00:06:59.798 Compare Command: Supported 00:06:59.798 Write Uncorrectable Command: Not Supported 00:06:59.798 Dataset Management Command: Supported 00:06:59.798 Write Zeroes Command: Supported 00:06:59.798 Set Features Save Field: Supported 00:06:59.798 Reservations: Not Supported 00:06:59.798 Timestamp: Supported 00:06:59.798 Copy: Supported 00:06:59.798 Volatile Write Cache: Present 00:06:59.798 Atomic Write Unit (Normal): 1 00:06:59.798 Atomic Write Unit (PFail): 1 00:06:59.798 Atomic Compare & Write Unit: 1 00:06:59.798 Fused Compare & Write: Not Supported 00:06:59.798 Scatter-Gather List 00:06:59.798 SGL Command Set: Supported 00:06:59.798 SGL Keyed: Not Supported 00:06:59.798 SGL Bit Bucket Descriptor: Not Supported 00:06:59.798 SGL Metadata Pointer: Not Supported 00:06:59.798 Oversized SGL: Not Supported 00:06:59.798 SGL Metadata Address: Not Supported 00:06:59.798 SGL Offset: Not Supported 00:06:59.798 Transport SGL Data Block: Not Supported 00:06:59.798 Replay Protected Memory Block: Not Supported 00:06:59.798 00:06:59.798 Firmware Slot Information 00:06:59.798 ========================= 00:06:59.798 Active slot: 1 00:06:59.798 Slot 1 Firmware Revision: 1.0 00:06:59.798 00:06:59.798 00:06:59.798 Commands Supported and Effects 00:06:59.799 ============================== 00:06:59.799 Admin Commands 00:06:59.799 -------------- 00:06:59.799 Delete I/O Submission Queue (00h): Supported 00:06:59.799 Create I/O Submission Queue (01h): Supported 00:06:59.799 Get Log Page (02h): Supported 00:06:59.799 Delete I/O Completion Queue (04h): Supported 
00:06:59.799 Create I/O Completion Queue (05h): Supported 00:06:59.799 Identify (06h): Supported 00:06:59.799 Abort (08h): Supported 00:06:59.799 Set Features (09h): Supported 00:06:59.799 Get Features (0Ah): Supported 00:06:59.799 Asynchronous Event Request (0Ch): Supported 00:06:59.799 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:59.799 Directive Send (19h): Supported 00:06:59.799 Directive Receive (1Ah): Supported 00:06:59.799 Virtualization Management (1Ch): Supported 00:06:59.799 Doorbell Buffer Config (7Ch): Supported 00:06:59.799 Format NVM (80h): Supported LBA-Change 00:06:59.799 I/O Commands 00:06:59.799 ------------ 00:06:59.799 Flush (00h): Supported LBA-Change 00:06:59.799 Write (01h): Supported LBA-Change 00:06:59.799 Read (02h): Supported 00:06:59.799 Compare (05h): Supported 00:06:59.799 Write Zeroes (08h): Supported LBA-Change 00:06:59.799 Dataset Management (09h): Supported LBA-Change 00:06:59.799 Unknown (0Ch): Supported 00:06:59.799 Unknown (12h): Supported 00:06:59.799 Copy (19h): Supported LBA-Change 00:06:59.799 Unknown (1Dh): Supported LBA-Change 00:06:59.799 00:06:59.799 Error Log 00:06:59.799 ========= 00:06:59.799 00:06:59.799 Arbitration 00:06:59.799 =========== 00:06:59.799 Arbitration Burst: no limit 00:06:59.799 00:06:59.799 Power Management 00:06:59.799 ================ 00:06:59.799 Number of Power States: 1 00:06:59.799 Current Power State: Power State #0 00:06:59.799 Power State #0: 00:06:59.799 Max Power: 25.00 W 00:06:59.799 Non-Operational State: Operational 00:06:59.799 Entry Latency: 16 microseconds 00:06:59.799 Exit Latency: 4 microseconds 00:06:59.799 Relative Read Throughput: 0 00:06:59.799 Relative Read Latency: 0 00:06:59.799 Relative Write Throughput: 0 00:06:59.799 Relative Write Latency: 0 00:06:59.799 Idle Power:[2024-12-05 23:44:32.435390] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62850 terminated unexpected 00:06:59.799 Not Reported 00:06:59.799 Active Power: Not Reported 00:06:59.799 Non-Operational Permissive Mode: Not Supported 00:06:59.799 00:06:59.799 Health Information 00:06:59.799 ================== 00:06:59.799 Critical Warnings: 00:06:59.799 Available Spare Space: OK 00:06:59.799 Temperature: OK 00:06:59.799 Device Reliability: OK 00:06:59.799 Read Only: No 00:06:59.799 Volatile Memory Backup: OK 00:06:59.799 Current Temperature: 323 Kelvin (50 Celsius) 00:06:59.799 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:59.799 Available Spare: 0% 00:06:59.799 Available Spare Threshold: 0% 00:06:59.799 Life Percentage Used: 0% 00:06:59.799 Data Units Read: 859 00:06:59.799 Data Units Written: 788 00:06:59.799 Host Read Commands: 41830 00:06:59.799 Host Write Commands: 41253 00:06:59.799 Controller Busy Time: 0 minutes 00:06:59.799 Power Cycles: 0 00:06:59.799 Power On Hours: 0 hours 00:06:59.799 Unsafe Shutdowns: 0 00:06:59.799 Unrecoverable Media Errors: 0 00:06:59.799 Lifetime Error Log Entries: 0 00:06:59.799 Warning Temperature Time: 0 minutes 00:06:59.799 Critical Temperature Time: 0 minutes 00:06:59.799 00:06:59.799 Number of Queues 00:06:59.799 ================ 00:06:59.799 Number of I/O Submission Queues: 64 00:06:59.799 Number of I/O Completion Queues: 64 00:06:59.799 00:06:59.799 ZNS Specific Controller Data 00:06:59.799 ============================ 00:06:59.799 Zone Append Size Limit: 0 00:06:59.799 00:06:59.799 00:06:59.799 Active Namespaces 00:06:59.799 ================= 00:06:59.799 Namespace ID:1 00:06:59.799 Error Recovery Timeout: Unlimited 00:06:59.799 
Command Set Identifier: NVM (00h) 00:06:59.799 Deallocate: Supported 00:06:59.799 Deallocated/Unwritten Error: Supported 00:06:59.799 Deallocated Read Value: All 0x00 00:06:59.799 Deallocate in Write Zeroes: Not Supported 00:06:59.799 Deallocated Guard Field: 0xFFFF 00:06:59.799 Flush: Supported 00:06:59.799 Reservation: Not Supported 00:06:59.799 Namespace Sharing Capabilities: Multiple Controllers 00:06:59.799 Size (in LBAs): 262144 (1GiB) 00:06:59.799 Capacity (in LBAs): 262144 (1GiB) 00:06:59.799 Utilization (in LBAs): 262144 (1GiB) 00:06:59.799 Thin Provisioning: Not Supported 00:06:59.799 Per-NS Atomic Units: No 00:06:59.799 Maximum Single Source Range Length: 128 00:06:59.799 Maximum Copy Length: 128 00:06:59.799 Maximum Source Range Count: 128 00:06:59.799 NGUID/EUI64 Never Reused: No 00:06:59.799 Namespace Write Protected: No 00:06:59.799 Endurance group ID: 1 00:06:59.799 Number of LBA Formats: 8 00:06:59.799 Current LBA Format: LBA Format #04 00:06:59.799 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:59.799 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:59.799 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:59.799 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:59.799 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:59.799 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:59.799 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:59.799 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:59.799 00:06:59.799 Get Feature FDP: 00:06:59.799 ================ 00:06:59.799 Enabled: Yes 00:06:59.799 FDP configuration index: 0 00:06:59.799 00:06:59.799 FDP configurations log page 00:06:59.799 =========================== 00:06:59.799 Number of FDP configurations: 1 00:06:59.799 Version: 0 00:06:59.799 Size: 112 00:06:59.799 FDP Configuration Descriptor: 0 00:06:59.799 Descriptor Size: 96 00:06:59.799 Reclaim Group Identifier format: 2 00:06:59.799 FDP Volatile Write Cache: Not Present 00:06:59.799 FDP Configuration: Valid 00:06:59.799 Vendor Specific Size: 0 00:06:59.799 Number of Reclaim Groups: 2 00:06:59.799 Number of Recalim Unit Handles: 8 00:06:59.799 Max Placement Identifiers: 128 00:06:59.799 Number of Namespaces Suppprted: 256 00:06:59.799 Reclaim unit Nominal Size: 6000000 bytes 00:06:59.799 Estimated Reclaim Unit Time Limit: Not Reported 00:06:59.799 RUH Desc #000: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #001: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #002: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #003: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #004: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #005: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #006: RUH Type: Initially Isolated 00:06:59.799 RUH Desc #007: RUH Type: Initially Isolated 00:06:59.799 00:06:59.799 FDP reclaim unit handle usage log page 00:06:59.799 ==================================[2024-12-05 23:44:32.437866] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62850 terminated unexpected 00:06:59.799 ==== 00:06:59.799 Number of Reclaim Unit Handles: 8 00:06:59.799 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:06:59.799 RUH Usage Desc #001: RUH Attributes: Unused 00:06:59.799 RUH Usage Desc #002: RUH Attributes: Unused 00:06:59.799 RUH Usage Desc #003: RUH Attributes: Unused 00:06:59.799 RUH Usage Desc #004: RUH Attributes: Unused 00:06:59.799 RUH Usage Desc #005: RUH Attributes: Unused 00:06:59.799 RUH Usage Desc #006: RUH Attributes: Unused 00:06:59.799 RUH Usage Desc 
#007: RUH Attributes: Unused 00:06:59.799 00:06:59.799 FDP statistics log page 00:06:59.799 ======================= 00:06:59.799 Host bytes with metadata written: 505323520 00:06:59.799 Media bytes with metadata written: 505380864 00:06:59.799 Media bytes erased: 0 00:06:59.799 00:06:59.799 FDP events log page 00:06:59.799 =================== 00:06:59.799 Number of FDP events: 0 00:06:59.799 00:06:59.799 NVM Specific Namespace Data 00:06:59.799 =========================== 00:06:59.799 Logical Block Storage Tag Mask: 0 00:06:59.799 Protection Information Capabilities: 00:06:59.799 16b Guard Protection Information Storage Tag Support: No 00:06:59.799 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:59.799 Storage Tag Check Read Support: No 00:06:59.799 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.799 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.799 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.799 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.799 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.799 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.799 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.800 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.800 ===================================================== 00:06:59.800 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:06:59.800 ===================================================== 00:06:59.800 Controller Capabilities/Features 00:06:59.800 ================================ 00:06:59.800 Vendor ID: 1b36 00:06:59.800 Subsystem Vendor ID: 1af4 00:06:59.800 Serial Number: 12340 00:06:59.800 Model Number: QEMU NVMe Ctrl 00:06:59.800 Firmware Version: 8.0.0 00:06:59.800 Recommended Arb Burst: 6 00:06:59.800 IEEE OUI Identifier: 00 54 52 00:06:59.800 Multi-path I/O 00:06:59.800 May have multiple subsystem ports: No 00:06:59.800 May have multiple controllers: No 00:06:59.800 Associated with SR-IOV VF: No 00:06:59.800 Max Data Transfer Size: 524288 00:06:59.800 Max Number of Namespaces: 256 00:06:59.800 Max Number of I/O Queues: 64 00:06:59.800 NVMe Specification Version (VS): 1.4 00:06:59.800 NVMe Specification Version (Identify): 1.4 00:06:59.800 Maximum Queue Entries: 2048 00:06:59.800 Contiguous Queues Required: Yes 00:06:59.800 Arbitration Mechanisms Supported 00:06:59.800 Weighted Round Robin: Not Supported 00:06:59.800 Vendor Specific: Not Supported 00:06:59.800 Reset Timeout: 7500 ms 00:06:59.800 Doorbell Stride: 4 bytes 00:06:59.800 NVM Subsystem Reset: Not Supported 00:06:59.800 Command Sets Supported 00:06:59.800 NVM Command Set: Supported 00:06:59.800 Boot Partition: Not Supported 00:06:59.800 Memory Page Size Minimum: 4096 bytes 00:06:59.800 Memory Page Size Maximum: 65536 bytes 00:06:59.800 Persistent Memory Region: Not Supported 00:06:59.800 Optional Asynchronous Events Supported 00:06:59.800 Namespace Attribute Notices: Supported 00:06:59.800 Firmware Activation Notices: Not Supported 00:06:59.800 ANA Change Notices: Not Supported 00:06:59.800 PLE Aggregate Log Change Notices: Not Supported 00:06:59.800 LBA Status Info Alert Notices: Not Supported 00:06:59.800 EGE Aggregate Log Change 
Notices: Not Supported 00:06:59.800 Normal NVM Subsystem Shutdown event: Not Supported 00:06:59.800 Zone Descriptor Change Notices: Not Supported 00:06:59.800 Discovery Log Change Notices: Not Supported 00:06:59.800 Controller Attributes 00:06:59.800 128-bit Host Identifier: Not Supported 00:06:59.800 Non-Operational Permissive Mode: Not Supported 00:06:59.800 NVM Sets: Not Supported 00:06:59.800 Read Recovery Levels: Not Supported 00:06:59.800 Endurance Groups: Not Supported 00:06:59.800 Predictable Latency Mode: Not Supported 00:06:59.800 Traffic Based Keep ALive: Not Supported 00:06:59.800 Namespace Granularity: Not Supported 00:06:59.800 SQ Associations: Not Supported 00:06:59.800 UUID List: Not Supported 00:06:59.800 Multi-Domain Subsystem: Not Supported 00:06:59.800 Fixed Capacity Management: Not Supported 00:06:59.800 Variable Capacity Management: Not Supported 00:06:59.800 Delete Endurance Group: Not Supported 00:06:59.800 Delete NVM Set: Not Supported 00:06:59.800 Extended LBA Formats Supported: Supported 00:06:59.800 Flexible Data Placement Supported: Not Supported 00:06:59.800 00:06:59.800 Controller Memory Buffer Support 00:06:59.800 ================================ 00:06:59.800 Supported: No 00:06:59.800 00:06:59.800 Persistent Memory Region Support 00:06:59.800 ================================ 00:06:59.800 Supported: No 00:06:59.800 00:06:59.800 Admin Command Set Attributes 00:06:59.800 ============================ 00:06:59.800 Security Send/Receive: Not Supported 00:06:59.800 Format NVM: Supported 00:06:59.800 Firmware Activate/Download: Not Supported 00:06:59.800 Namespace Management: Supported 00:06:59.800 Device Self-Test: Not Supported 00:06:59.800 Directives: Supported 00:06:59.800 NVMe-MI: Not Supported 00:06:59.800 Virtualization Management: Not Supported 00:06:59.800 Doorbell Buffer Config: Supported 00:06:59.800 Get LBA Status Capability: Not Supported 00:06:59.800 Command & Feature Lockdown Capability: Not Supported 00:06:59.800 Abort Command Limit: 4 00:06:59.800 Async Event Request Limit: 4 00:06:59.800 Number of Firmware Slots: N/A 00:06:59.800 Firmware Slot 1 Read-Only: N/A 00:06:59.800 Firmware Activation Without Reset: N/A 00:06:59.800 Multiple Update Detection Support: N/A 00:06:59.800 Firmware Update Granularity: No Information Provided 00:06:59.800 Per-Namespace SMART Log: Yes 00:06:59.800 Asymmetric Namespace Access Log Page: Not Supported 00:06:59.800 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:06:59.800 Command Effects Log Page: Supported 00:06:59.800 Get Log Page Extended Data: Supported 00:06:59.800 Telemetry Log Pages: Not Supported 00:06:59.800 Persistent Event Log Pages: Not Supported 00:06:59.800 Supported Log Pages Log Page: May Support 00:06:59.800 Commands Supported & Effects Log Page: Not Supported 00:06:59.800 Feature Identifiers & Effects Log Page:May Support 00:06:59.800 NVMe-MI Commands & Effects Log Page: May Support 00:06:59.800 Data Area 4 for Telemetry Log: Not Supported 00:06:59.800 Error Log Page Entries Supported: 1 00:06:59.800 Keep Alive: Not Supported 00:06:59.800 00:06:59.800 NVM Command Set Attributes 00:06:59.800 ========================== 00:06:59.800 Submission Queue Entry Size 00:06:59.800 Max: 64 00:06:59.800 Min: 64 00:06:59.800 Completion Queue Entry Size 00:06:59.800 Max: 16 00:06:59.800 Min: 16 00:06:59.800 Number of Namespaces: 256 00:06:59.800 Compare Command: Supported 00:06:59.800 Write Uncorrectable Command: Not Supported 00:06:59.800 Dataset Management Command: Supported 00:06:59.800 Write Zeroes Command: 
Supported 00:06:59.800 Set Features Save Field: Supported 00:06:59.800 Reservations: Not Supported 00:06:59.800 Timestamp: Supported 00:06:59.800 Copy: Supported 00:06:59.800 Volatile Write Cache: Present 00:06:59.800 Atomic Write Unit (Normal): 1 00:06:59.800 Atomic Write Unit (PFail): 1 00:06:59.800 Atomic Compare & Write Unit: 1 00:06:59.800 Fused Compare & Write: Not Supported 00:06:59.800 Scatter-Gather List 00:06:59.800 SGL Command Set: Supported 00:06:59.800 SGL Keyed: Not Supported 00:06:59.800 SGL Bit Bucket Descriptor: Not Supported 00:06:59.800 SGL Metadata Pointer: Not Supported 00:06:59.800 Oversized SGL: Not Supported 00:06:59.800 SGL Metadata Address: Not Supported 00:06:59.800 SGL Offset: Not Supported 00:06:59.800 Transport SGL Data Block: Not Supported 00:06:59.800 Replay Protected Memory Block: Not Supported 00:06:59.800 00:06:59.800 Firmware Slot Information 00:06:59.800 ========================= 00:06:59.800 Active slot: 1 00:06:59.800 Slot 1 Firmware Revision: 1.0 00:06:59.800 00:06:59.800 00:06:59.800 Commands Supported and Effects 00:06:59.800 ============================== 00:06:59.800 Admin Commands 00:06:59.800 -------------- 00:06:59.800 Delete I/O Submission Queue (00h): Supported 00:06:59.800 Create I/O Submission Queue (01h): Supported 00:06:59.800 Get Log Page (02h): Supported 00:06:59.800 Delete I/O Completion Queue (04h): Supported 00:06:59.800 Create I/O Completion Queue (05h): Supported 00:06:59.800 Identify (06h): Supported 00:06:59.800 Abort (08h): Supported 00:06:59.800 Set Features (09h): Supported 00:06:59.800 Get Features (0Ah): Supported 00:06:59.800 Asynchronous Event Request (0Ch): Supported 00:06:59.800 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:59.800 Directive Send (19h): Supported 00:06:59.800 Directive Receive (1Ah): Supported 00:06:59.800 Virtualization Management (1Ch): Supported 00:06:59.800 Doorbell Buffer Config (7Ch): Supported 00:06:59.800 Format NVM (80h): Supported LBA-Change 00:06:59.800 I/O Commands 00:06:59.800 ------------ 00:06:59.800 Flush (00h): Supported LBA-Change 00:06:59.800 Write (01h): Supported LBA-Change 00:06:59.800 Read (02h): Supported 00:06:59.800 Compare (05h): Supported 00:06:59.800 Write Zeroes (08h): Supported LBA-Change 00:06:59.800 Dataset Management (09h): Supported LBA-Change 00:06:59.800 Unknown (0Ch): Supported 00:06:59.800 Unknown (12h): Supported 00:06:59.800 Copy (19h): Supported LBA-Change 00:06:59.800 Unknown (1Dh): Supported LBA-Change 00:06:59.800 00:06:59.800 Error Log 00:06:59.800 ========= 00:06:59.800 00:06:59.800 Arbitration 00:06:59.800 =========== 00:06:59.800 Arbitration Burst: no limit 00:06:59.800 00:06:59.800 Power Management 00:06:59.800 ================ 00:06:59.800 Number of Power States: 1 00:06:59.800 Current Power State: Power State #0 00:06:59.800 Power State #0: 00:06:59.800 Max Power: 25.00 W 00:06:59.800 Non-Operational State: Operational 00:06:59.800 Entry Latency: 16 microseconds 00:06:59.800 Exit Latency: 4 microseconds 00:06:59.800 Relative Read Throughput: 0 00:06:59.800 Relative Read Latency: 0 00:06:59.800 Relative Write Throughput: 0 00:06:59.800 Relative Write Latency: 0 00:06:59.801 Idle Power: Not Reported 00:06:59.801 Active Power: Not Reported 00:06:59.801 Non-Operational Permissive Mode: Not Supported 00:06:59.801 00:06:59.801 Health Information 00:06:59.801 ================== 00:06:59.801 Critical Warnings: 00:06:59.801 Available Spare Space: OK 00:06:59.801 Temperature: OK 00:06:59.801 Device Reliability: OK 00:06:59.801 Read Only: No 
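[editor's sketch] The Health Information block above for the 0000:00:10.0 controller (serial 12340) reports spare capacity, temperature and I/O counters. Earlier in this run the nvme_cuse module announced character devices such as spdk/nvme0 and spdk/nvme1, so the same health page can in principle be cross-checked with nvme-cli. This is a minimal sketch, not part of the recorded test; it assumes nvme-cli is installed on the guest and that the /dev/spdk/nvme* nodes created above are still present.

#!/usr/bin/env bash
# Cross-check the SMART/health log page through the CUSE nodes that
# nvme_cuse reported creating earlier in this run (spdk/nvme0 .. spdk/nvme3).
for ctrl in /dev/spdk/nvme[0-3]; do
    [ -e "$ctrl" ] || continue
    echo "== $ctrl =="
    # nvme-cli prints fields like temperature, available_spare and
    # data_units_read/written, which should roughly match the identify
    # output shown in this log.
    nvme smart-log "$ctrl" | grep -E 'temperature|available_spare|data_units_(read|written)'
done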
00:06:59.801 Volatile Memory Backup: OK 00:06:59.801 Current Temperature: 323 Kelvin (50 Celsius) 00:06:59.801 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:59.801 Available Spare: 0% 00:06:59.801 Available Spare Threshold: 0% 00:06:59.801 Life Percentage Used: 0% 00:06:59.801 Data Units Read: 686 00:06:59.801 Data Units Written: 614 00:06:59.801 Host Read Commands: 40119 00:06:59.801 Host Write Commands: 39905 00:06:59.801 Controller Busy Time: 0 minutes 00:06:59.801 Power Cycles: 0 00:06:59.801 Power On Hours: 0 hours 00:06:59.801 Unsafe Shutdowns: 0 00:06:59.801 Unrecoverable Media Errors: 0 00:06:59.801 Lifetime Error Log Entries: 0 00:06:59.801 Warning Temperature Time: 0 minutes 00:06:59.801 Critical Temperature Time: 0 minutes 00:06:59.801 00:06:59.801 Number of Queues 00:06:59.801 ================ 00:06:59.801 Number of I/O Submission Queues: 64 00:06:59.801 Number of I/O Completion Queues: 64 00:06:59.801 00:06:59.801 ZNS Specific Controller Data 00:06:59.801 ============================ 00:06:59.801 Zone Append Size Limit: 0 00:06:59.801 00:06:59.801 00:06:59.801 Active Namespaces 00:06:59.801 ================= 00:06:59.801 Namespace ID:1 00:06:59.801 Error Recovery Timeout: Unlimited 00:06:59.801 Command Set Identifier: NVM (00h) 00:06:59.801 Deallocate: Supported 00:06:59.801 Deallocated/Unwritten Error: Supported 00:06:59.801 Deallocated Read Value: All 0x00 00:06:59.801 Deallocate in Write Zeroes: Not Supported 00:06:59.801 Deallocated Guard Field: 0xFFFF 00:06:59.801 Flush: Supported 00:06:59.801 Reservation: Not Supported 00:06:59.801 Metadata Transferred as: Separate Metadata Buffer 00:06:59.801 Namespace Sharing Capabilities: Private 00:06:59.801 Size (in LBAs): 1548666 (5GiB) 00:06:59.801 Capacity (in LBAs): 1548666 (5GiB) 00:06:59.801 Utilization (in LBAs): 1548666 (5GiB) 00:06:59.801 Thin Provisioning: Not Supported 00:06:59.801 Per-NS Atomic Units: No 00:06:59.801 Maximum Single Source Range Length: 128 00:06:59.801 Maximum Copy Length: 128 00:06:59.801 Maximum Source Range Count: 128 00:06:59.801 NGUID/EUI64 Never Reused: No 00:06:59.801 Namespace Write Protected: No 00:06:59.801 Number of LBA Formats: 8 00:06:59.801 Current LBA Format: [2024-12-05 23:44:32.438774] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62850 terminated unexpected 00:06:59.801 LBA Format #07 00:06:59.801 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:59.801 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:59.801 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:59.801 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:59.801 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:59.801 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:59.801 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:59.801 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:59.801 00:06:59.801 NVM Specific Namespace Data 00:06:59.801 =========================== 00:06:59.801 Logical Block Storage Tag Mask: 0 00:06:59.801 Protection Information Capabilities: 00:06:59.801 16b Guard Protection Information Storage Tag Support: No 00:06:59.801 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:59.801 Storage Tag Check Read Support: No 00:06:59.801 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.801 ===================================================== 00:06:59.801 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:06:59.801 ===================================================== 00:06:59.801 Controller Capabilities/Features 00:06:59.801 ================================ 00:06:59.801 Vendor ID: 1b36 00:06:59.801 Subsystem Vendor ID: 1af4 00:06:59.801 Serial Number: 12341 00:06:59.801 Model Number: QEMU NVMe Ctrl 00:06:59.801 Firmware Version: 8.0.0 00:06:59.801 Recommended Arb Burst: 6 00:06:59.801 IEEE OUI Identifier: 00 54 52 00:06:59.801 Multi-path I/O 00:06:59.801 May have multiple subsystem ports: No 00:06:59.801 May have multiple controllers: No 00:06:59.801 Associated with SR-IOV VF: No 00:06:59.801 Max Data Transfer Size: 524288 00:06:59.801 Max Number of Namespaces: 256 00:06:59.801 Max Number of I/O Queues: 64 00:06:59.801 NVMe Specification Version (VS): 1.4 00:06:59.801 NVMe Specification Version (Identify): 1.4 00:06:59.801 Maximum Queue Entries: 2048 00:06:59.801 Contiguous Queues Required: Yes 00:06:59.801 Arbitration Mechanisms Supported 00:06:59.801 Weighted Round Robin: Not Supported 00:06:59.801 Vendor Specific: Not Supported 00:06:59.801 Reset Timeout: 7500 ms 00:06:59.801 Doorbell Stride: 4 bytes 00:06:59.801 NVM Subsystem Reset: Not Supported 00:06:59.801 Command Sets Supported 00:06:59.801 NVM Command Set: Supported 00:06:59.801 Boot Partition: Not Supported 00:06:59.801 Memory Page Size Minimum: 4096 bytes 00:06:59.801 Memory Page Size Maximum: 65536 bytes 00:06:59.801 Persistent Memory Region: Not Supported 00:06:59.801 Optional Asynchronous Events Supported 00:06:59.801 Namespace Attribute Notices: Supported 00:06:59.801 Firmware Activation Notices: Not Supported 00:06:59.801 ANA Change Notices: Not Supported 00:06:59.801 PLE Aggregate Log Change Notices: Not Supported 00:06:59.801 LBA Status Info Alert Notices: Not Supported 00:06:59.801 EGE Aggregate Log Change Notices: Not Supported 00:06:59.801 Normal NVM Subsystem Shutdown event: Not Supported 00:06:59.801 Zone Descriptor Change Notices: Not Supported 00:06:59.801 Discovery Log Change Notices: Not Supported 00:06:59.801 Controller Attributes 00:06:59.801 128-bit Host Identifier: Not Supported 00:06:59.801 Non-Operational Permissive Mode: Not Supported 00:06:59.801 NVM Sets: Not Supported 00:06:59.801 Read Recovery Levels: Not Supported 00:06:59.801 Endurance Groups: Not Supported 00:06:59.801 Predictable Latency Mode: Not Supported 00:06:59.801 Traffic Based Keep ALive: Not Supported 00:06:59.801 Namespace Granularity: Not Supported 00:06:59.801 SQ Associations: Not Supported 00:06:59.801 UUID List: Not Supported 00:06:59.801 Multi-Domain Subsystem: Not Supported 00:06:59.801 Fixed Capacity Management: Not Supported 00:06:59.801 Variable Capacity Management: Not Supported 00:06:59.801 Delete Endurance Group: Not Supported 00:06:59.801 Delete NVM Set: Not Supported 00:06:59.801 Extended LBA Formats Supported: Supported 00:06:59.801 Flexible Data Placement 
Supported: Not Supported 00:06:59.801 00:06:59.801 Controller Memory Buffer Support 00:06:59.801 ================================ 00:06:59.801 Supported: No 00:06:59.801 00:06:59.801 Persistent Memory Region Support 00:06:59.801 ================================ 00:06:59.801 Supported: No 00:06:59.801 00:06:59.801 Admin Command Set Attributes 00:06:59.801 ============================ 00:06:59.801 Security Send/Receive: Not Supported 00:06:59.801 Format NVM: Supported 00:06:59.801 Firmware Activate/Download: Not Supported 00:06:59.801 Namespace Management: Supported 00:06:59.801 Device Self-Test: Not Supported 00:06:59.801 Directives: Supported 00:06:59.801 NVMe-MI: Not Supported 00:06:59.801 Virtualization Management: Not Supported 00:06:59.801 Doorbell Buffer Config: Supported 00:06:59.801 Get LBA Status Capability: Not Supported 00:06:59.801 Command & Feature Lockdown Capability: Not Supported 00:06:59.801 Abort Command Limit: 4 00:06:59.801 Async Event Request Limit: 4 00:06:59.801 Number of Firmware Slots: N/A 00:06:59.802 Firmware Slot 1 Read-Only: N/A 00:06:59.802 Firmware Activation Without Reset: N/A 00:06:59.802 Multiple Update Detection Support: N/A 00:06:59.802 Firmware Update Granularity: No Information Provided 00:06:59.802 Per-Namespace SMART Log: Yes 00:06:59.802 Asymmetric Namespace Access Log Page: Not Supported 00:06:59.802 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:06:59.802 Command Effects Log Page: Supported 00:06:59.802 Get Log Page Extended Data: Supported 00:06:59.802 Telemetry Log Pages: Not Supported 00:06:59.802 Persistent Event Log Pages: Not Supported 00:06:59.802 Supported Log Pages Log Page: May Support 00:06:59.802 Commands Supported & Effects Log Page: Not Supported 00:06:59.802 Feature Identifiers & Effects Log Page:May Support 00:06:59.802 NVMe-MI Commands & Effects Log Page: May Support 00:06:59.802 Data Area 4 for Telemetry Log: Not Supported 00:06:59.802 Error Log Page Entries Supported: 1 00:06:59.802 Keep Alive: Not Supported 00:06:59.802 00:06:59.802 NVM Command Set Attributes 00:06:59.802 ========================== 00:06:59.802 Submission Queue Entry Size 00:06:59.802 Max: 64 00:06:59.802 Min: 64 00:06:59.802 Completion Queue Entry Size 00:06:59.802 Max: 16 00:06:59.802 Min: 16 00:06:59.802 Number of Namespaces: 256 00:06:59.802 Compare Command: Supported 00:06:59.802 Write Uncorrectable Command: Not Supported 00:06:59.802 Dataset Management Command: Supported 00:06:59.802 Write Zeroes Command: Supported 00:06:59.802 Set Features Save Field: Supported 00:06:59.802 Reservations: Not Supported 00:06:59.802 Timestamp: Supported 00:06:59.802 Copy: Supported 00:06:59.802 Volatile Write Cache: Present 00:06:59.802 Atomic Write Unit (Normal): 1 00:06:59.802 Atomic Write Unit (PFail): 1 00:06:59.802 Atomic Compare & Write Unit: 1 00:06:59.802 Fused Compare & Write: Not Supported 00:06:59.802 Scatter-Gather List 00:06:59.802 SGL Command Set: Supported 00:06:59.802 SGL Keyed: Not Supported 00:06:59.802 SGL Bit Bucket Descriptor: Not Supported 00:06:59.802 SGL Metadata Pointer: Not Supported 00:06:59.802 Oversized SGL: Not Supported 00:06:59.802 SGL Metadata Address: Not Supported 00:06:59.802 SGL Offset: Not Supported 00:06:59.802 Transport SGL Data Block: Not Supported 00:06:59.802 Replay Protected Memory Block: Not Supported 00:06:59.802 00:06:59.802 Firmware Slot Information 00:06:59.802 ========================= 00:06:59.802 Active slot: 1 00:06:59.802 Slot 1 Firmware Revision: 1.0 00:06:59.802 00:06:59.802 00:06:59.802 Commands Supported and Effects 
00:06:59.802 ============================== 00:06:59.802 Admin Commands 00:06:59.802 -------------- 00:06:59.802 Delete I/O Submission Queue (00h): Supported 00:06:59.802 Create I/O Submission Queue (01h): Supported 00:06:59.802 Get Log Page (02h): Supported 00:06:59.802 Delete I/O Completion Queue (04h): Supported 00:06:59.802 Create I/O Completion Queue (05h): Supported 00:06:59.802 Identify (06h): Supported 00:06:59.802 Abort (08h): Supported 00:06:59.802 Set Features (09h): Supported 00:06:59.802 Get Features (0Ah): Supported 00:06:59.802 Asynchronous Event Request (0Ch): Supported 00:06:59.802 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:59.802 Directive Send (19h): Supported 00:06:59.802 Directive Receive (1Ah): Supported 00:06:59.802 Virtualization Management (1Ch): Supported 00:06:59.802 Doorbell Buffer Config (7Ch): Supported 00:06:59.802 Format NVM (80h): Supported LBA-Change 00:06:59.802 I/O Commands 00:06:59.802 ------------ 00:06:59.802 Flush (00h): Supported LBA-Change 00:06:59.802 Write (01h): Supported LBA-Change 00:06:59.802 Read (02h): Supported 00:06:59.802 Compare (05h): Supported 00:06:59.802 Write Zeroes (08h): Supported LBA-Change 00:06:59.802 Dataset Management (09h): Supported LBA-Change 00:06:59.802 Unknown (0Ch): Supported 00:06:59.802 Unknown (12h): Supported 00:06:59.802 Copy (19h): Supported LBA-Change 00:06:59.802 Unknown (1Dh): Supported LBA-Change 00:06:59.802 00:06:59.802 Error Log 00:06:59.802 ========= 00:06:59.802 00:06:59.802 Arbitration 00:06:59.802 =========== 00:06:59.802 Arbitration Burst: no limit 00:06:59.802 00:06:59.802 Power Management 00:06:59.802 ================ 00:06:59.802 Number of Power States: 1 00:06:59.802 Current Power State: Power State #0 00:06:59.802 Power State #0: 00:06:59.802 Max Power: 25.00 W 00:06:59.802 Non-Operational State: Operational 00:06:59.802 Entry Latency: 16 microseconds 00:06:59.802 Exit Latency: 4 microseconds 00:06:59.802 Relative Read Throughput: 0 00:06:59.802 Relative Read Latency: 0 00:06:59.802 Relative Write Throughput: 0 00:06:59.802 Relative Write Latency: 0 00:06:59.802 Idle Power: Not Reported 00:06:59.802 Active Power: Not Reported 00:06:59.802 Non-Operational Permissive Mode: Not Supported 00:06:59.802 00:06:59.802 Health Information 00:06:59.802 ================== 00:06:59.802 Critical Warnings: 00:06:59.802 Available Spare Space: OK 00:06:59.802 Temperature: OK 00:06:59.802 Device Reliability: OK 00:06:59.802 Read Only: No 00:06:59.802 Volatile Memory Backup: OK 00:06:59.802 Current Temperature: 323 Kelvin (50 Celsius) 00:06:59.802 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:59.802 Available Spare: 0% 00:06:59.802 Available Spare Threshold: 0% 00:06:59.802 Life Percentage Used: 0% 00:06:59.802 Data Units Read: 1052 00:06:59.802 Data Units Written: 919 00:06:59.802 Host Read Commands: 59334 00:06:59.802 Host Write Commands: 58131 00:06:59.802 Controller Busy Time: 0 minutes 00:06:59.802 Power Cycles: 0 00:06:59.802 Power On Hours: 0 hours 00:06:59.802 Unsafe Shutdowns: 0 00:06:59.802 Unrecoverable Media Errors: 0 00:06:59.802 Lifetime Error Log Entries: 0 00:06:59.802 Warning Temperature Time: 0 minutes 00:06:59.802 Critical Temperature Time: 0 minutes 00:06:59.802 00:06:59.802 Number of Queues 00:06:59.802 ================ 00:06:59.802 Number of I/O Submission Queues: 64 00:06:59.802 Number of I/O Completion Queues: 64 00:06:59.802 00:06:59.802 ZNS Specific Controller Data 00:06:59.802 ============================ 00:06:59.802 Zone Append Size Limit: 0 00:06:59.802 
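[editor's sketch] The identify pass shown here walks all four QEMU NVMe controllers attached to this VM (serials 12340 through 12343). A quick way to confirm the enumeration from the same binary is sketched below; the binary path and the "-i 0" shared-memory id are taken from the commands recorded above, and the expected count of 4 is specific to this test VM.

# Count the controllers reported by spdk_nvme_identify on this setup.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
out=$("$IDENTIFY" -i 0)
echo "$out" | grep -c 'Serial Number:'   # expected: 4 on this VM (12340..12343)
echo "$out" | grep 'Serial Number:'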
00:06:59.802 00:06:59.802 Active Namespaces 00:06:59.802 ================= 00:06:59.802 Namespace ID:1 00:06:59.802 Error Recovery Timeout: Unlimited 00:06:59.802 Command Set Identifier: NVM (00h) 00:06:59.802 Deallocate: Supported 00:06:59.802 Deallocated/Unwritten Error: Supported 00:06:59.802 Deallocated Read Value: All 0x00 00:06:59.802 Deallocate in Write Zeroes: Not Supported 00:06:59.802 Deallocated Guard Field: 0xFFFF 00:06:59.802 Flush: Supported 00:06:59.802 Reservation: Not Supported 00:06:59.802 Namespace Sharing Capabilities: Private 00:06:59.802 Size (in LBAs): 1310720 (5GiB) 00:06:59.802 Capacity (in LBAs): 1310720 (5GiB) 00:06:59.802 Utilization (in LBAs): 1310720 (5GiB) 00:06:59.802 Thin Provisioning: Not Supported 00:06:59.802 Per-NS Atomic Units: No 00:06:59.802 Maximum Single Source Range Length: 128 00:06:59.802 Maximum Copy Length: 128 00:06:59.802 Maximum Source Range Count: 128 00:06:59.802 NGUID/EUI64 Never Reused: No 00:06:59.802 Namespace Write Protected: No 00:06:59.802 Number of LBA Formats: 8 00:06:59.802 Current LBA Format: LBA Format #04 00:06:59.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:59.802 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:59.802 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:59.802 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:59.802 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:59.802 LBA Forma[2024-12-05 23:44:32.439573] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62850 terminated unexpected 00:06:59.802 t #05: Data Size: 4096 Metadata Size: 8 00:06:59.802 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:59.802 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:59.802 00:06:59.802 NVM Specific Namespace Data 00:06:59.802 =========================== 00:06:59.802 Logical Block Storage Tag Mask: 0 00:06:59.802 Protection Information Capabilities: 00:06:59.802 16b Guard Protection Information Storage Tag Support: No 00:06:59.802 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:59.802 Storage Tag Check Read Support: No 00:06:59.802 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.802 ===================================================== 00:06:59.802 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:06:59.803 ===================================================== 00:06:59.803 Controller Capabilities/Features 00:06:59.803 ================================ 00:06:59.803 Vendor ID: 1b36 00:06:59.803 Subsystem Vendor ID: 1af4 00:06:59.803 Serial Number: 12342 00:06:59.803 Model Number: QEMU NVMe Ctrl 00:06:59.803 Firmware Version: 8.0.0 00:06:59.803 Recommended Arb Burst: 6 00:06:59.803 IEEE OUI Identifier: 00 54 52 00:06:59.803 Multi-path I/O 
00:06:59.803 May have multiple subsystem ports: No 00:06:59.803 May have multiple controllers: No 00:06:59.803 Associated with SR-IOV VF: No 00:06:59.803 Max Data Transfer Size: 524288 00:06:59.803 Max Number of Namespaces: 256 00:06:59.803 Max Number of I/O Queues: 64 00:06:59.803 NVMe Specification Version (VS): 1.4 00:06:59.803 NVMe Specification Version (Identify): 1.4 00:06:59.803 Maximum Queue Entries: 2048 00:06:59.803 Contiguous Queues Required: Yes 00:06:59.803 Arbitration Mechanisms Supported 00:06:59.803 Weighted Round Robin: Not Supported 00:06:59.803 Vendor Specific: Not Supported 00:06:59.803 Reset Timeout: 7500 ms 00:06:59.803 Doorbell Stride: 4 bytes 00:06:59.803 NVM Subsystem Reset: Not Supported 00:06:59.803 Command Sets Supported 00:06:59.803 NVM Command Set: Supported 00:06:59.803 Boot Partition: Not Supported 00:06:59.803 Memory Page Size Minimum: 4096 bytes 00:06:59.803 Memory Page Size Maximum: 65536 bytes 00:06:59.803 Persistent Memory Region: Not Supported 00:06:59.803 Optional Asynchronous Events Supported 00:06:59.803 Namespace Attribute Notices: Supported 00:06:59.803 Firmware Activation Notices: Not Supported 00:06:59.803 ANA Change Notices: Not Supported 00:06:59.803 PLE Aggregate Log Change Notices: Not Supported 00:06:59.803 LBA Status Info Alert Notices: Not Supported 00:06:59.803 EGE Aggregate Log Change Notices: Not Supported 00:06:59.803 Normal NVM Subsystem Shutdown event: Not Supported 00:06:59.803 Zone Descriptor Change Notices: Not Supported 00:06:59.803 Discovery Log Change Notices: Not Supported 00:06:59.803 Controller Attributes 00:06:59.803 128-bit Host Identifier: Not Supported 00:06:59.803 Non-Operational Permissive Mode: Not Supported 00:06:59.803 NVM Sets: Not Supported 00:06:59.803 Read Recovery Levels: Not Supported 00:06:59.803 Endurance Groups: Not Supported 00:06:59.803 Predictable Latency Mode: Not Supported 00:06:59.803 Traffic Based Keep ALive: Not Supported 00:06:59.803 Namespace Granularity: Not Supported 00:06:59.803 SQ Associations: Not Supported 00:06:59.803 UUID List: Not Supported 00:06:59.803 Multi-Domain Subsystem: Not Supported 00:06:59.803 Fixed Capacity Management: Not Supported 00:06:59.803 Variable Capacity Management: Not Supported 00:06:59.803 Delete Endurance Group: Not Supported 00:06:59.803 Delete NVM Set: Not Supported 00:06:59.803 Extended LBA Formats Supported: Supported 00:06:59.803 Flexible Data Placement Supported: Not Supported 00:06:59.803 00:06:59.803 Controller Memory Buffer Support 00:06:59.803 ================================ 00:06:59.803 Supported: No 00:06:59.803 00:06:59.803 Persistent Memory Region Support 00:06:59.803 ================================ 00:06:59.803 Supported: No 00:06:59.803 00:06:59.803 Admin Command Set Attributes 00:06:59.803 ============================ 00:06:59.803 Security Send/Receive: Not Supported 00:06:59.803 Format NVM: Supported 00:06:59.803 Firmware Activate/Download: Not Supported 00:06:59.803 Namespace Management: Supported 00:06:59.803 Device Self-Test: Not Supported 00:06:59.803 Directives: Supported 00:06:59.803 NVMe-MI: Not Supported 00:06:59.803 Virtualization Management: Not Supported 00:06:59.803 Doorbell Buffer Config: Supported 00:06:59.803 Get LBA Status Capability: Not Supported 00:06:59.803 Command & Feature Lockdown Capability: Not Supported 00:06:59.803 Abort Command Limit: 4 00:06:59.803 Async Event Request Limit: 4 00:06:59.803 Number of Firmware Slots: N/A 00:06:59.803 Firmware Slot 1 Read-Only: N/A 00:06:59.803 Firmware Activation Without Reset: N/A 
00:06:59.803 Multiple Update Detection Support: N/A 00:06:59.803 Firmware Update Granularity: No Information Provided 00:06:59.803 Per-Namespace SMART Log: Yes 00:06:59.803 Asymmetric Namespace Access Log Page: Not Supported 00:06:59.803 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:06:59.803 Command Effects Log Page: Supported 00:06:59.803 Get Log Page Extended Data: Supported 00:06:59.803 Telemetry Log Pages: Not Supported 00:06:59.803 Persistent Event Log Pages: Not Supported 00:06:59.803 Supported Log Pages Log Page: May Support 00:06:59.803 Commands Supported & Effects Log Page: Not Supported 00:06:59.803 Feature Identifiers & Effects Log Page:May Support 00:06:59.803 NVMe-MI Commands & Effects Log Page: May Support 00:06:59.803 Data Area 4 for Telemetry Log: Not Supported 00:06:59.803 Error Log Page Entries Supported: 1 00:06:59.803 Keep Alive: Not Supported 00:06:59.803 00:06:59.803 NVM Command Set Attributes 00:06:59.803 ========================== 00:06:59.803 Submission Queue Entry Size 00:06:59.803 Max: 64 00:06:59.803 Min: 64 00:06:59.803 Completion Queue Entry Size 00:06:59.803 Max: 16 00:06:59.803 Min: 16 00:06:59.803 Number of Namespaces: 256 00:06:59.803 Compare Command: Supported 00:06:59.803 Write Uncorrectable Command: Not Supported 00:06:59.803 Dataset Management Command: Supported 00:06:59.803 Write Zeroes Command: Supported 00:06:59.803 Set Features Save Field: Supported 00:06:59.803 Reservations: Not Supported 00:06:59.803 Timestamp: Supported 00:06:59.803 Copy: Supported 00:06:59.803 Volatile Write Cache: Present 00:06:59.803 Atomic Write Unit (Normal): 1 00:06:59.803 Atomic Write Unit (PFail): 1 00:06:59.803 Atomic Compare & Write Unit: 1 00:06:59.803 Fused Compare & Write: Not Supported 00:06:59.803 Scatter-Gather List 00:06:59.803 SGL Command Set: Supported 00:06:59.803 SGL Keyed: Not Supported 00:06:59.803 SGL Bit Bucket Descriptor: Not Supported 00:06:59.803 SGL Metadata Pointer: Not Supported 00:06:59.803 Oversized SGL: Not Supported 00:06:59.803 SGL Metadata Address: Not Supported 00:06:59.803 SGL Offset: Not Supported 00:06:59.803 Transport SGL Data Block: Not Supported 00:06:59.803 Replay Protected Memory Block: Not Supported 00:06:59.803 00:06:59.803 Firmware Slot Information 00:06:59.803 ========================= 00:06:59.803 Active slot: 1 00:06:59.803 Slot 1 Firmware Revision: 1.0 00:06:59.803 00:06:59.803 00:06:59.803 Commands Supported and Effects 00:06:59.803 ============================== 00:06:59.803 Admin Commands 00:06:59.803 -------------- 00:06:59.803 Delete I/O Submission Queue (00h): Supported 00:06:59.803 Create I/O Submission Queue (01h): Supported 00:06:59.803 Get Log Page (02h): Supported 00:06:59.803 Delete I/O Completion Queue (04h): Supported 00:06:59.803 Create I/O Completion Queue (05h): Supported 00:06:59.803 Identify (06h): Supported 00:06:59.803 Abort (08h): Supported 00:06:59.803 Set Features (09h): Supported 00:06:59.803 Get Features (0Ah): Supported 00:06:59.803 Asynchronous Event Request (0Ch): Supported 00:06:59.803 Namespace Attachment (15h): Supported NS-Inventory-Change 00:06:59.803 Directive Send (19h): Supported 00:06:59.803 Directive Receive (1Ah): Supported 00:06:59.803 Virtualization Management (1Ch): Supported 00:06:59.803 Doorbell Buffer Config (7Ch): Supported 00:06:59.804 Format NVM (80h): Supported LBA-Change 00:06:59.804 I/O Commands 00:06:59.804 ------------ 00:06:59.804 Flush (00h): Supported LBA-Change 00:06:59.804 Write (01h): Supported LBA-Change 00:06:59.804 Read (02h): Supported 00:06:59.804 Compare (05h): 
Supported 00:06:59.804 Write Zeroes (08h): Supported LBA-Change 00:06:59.804 Dataset Management (09h): Supported LBA-Change 00:06:59.804 Unknown (0Ch): Supported 00:06:59.804 Unknown (12h): Supported 00:06:59.804 Copy (19h): Supported LBA-Change 00:06:59.804 Unknown (1Dh): Supported LBA-Change 00:06:59.804 00:06:59.804 Error Log 00:06:59.804 ========= 00:06:59.804 00:06:59.804 Arbitration 00:06:59.804 =========== 00:06:59.804 Arbitration Burst: no limit 00:06:59.804 00:06:59.804 Power Management 00:06:59.804 ================ 00:06:59.804 Number of Power States: 1 00:06:59.804 Current Power State: Power State #0 00:06:59.804 Power State #0: 00:06:59.804 Max Power: 25.00 W 00:06:59.804 Non-Operational State: Operational 00:06:59.804 Entry Latency: 16 microseconds 00:06:59.804 Exit Latency: 4 microseconds 00:06:59.804 Relative Read Throughput: 0 00:06:59.804 Relative Read Latency: 0 00:06:59.804 Relative Write Throughput: 0 00:06:59.804 Relative Write Latency: 0 00:06:59.804 Idle Power: Not Reported 00:06:59.804 Active Power: Not Reported 00:06:59.804 Non-Operational Permissive Mode: Not Supported 00:06:59.804 00:06:59.804 Health Information 00:06:59.804 ================== 00:06:59.804 Critical Warnings: 00:06:59.804 Available Spare Space: OK 00:06:59.804 Temperature: OK 00:06:59.804 Device Reliability: OK 00:06:59.804 Read Only: No 00:06:59.804 Volatile Memory Backup: OK 00:06:59.804 Current Temperature: 323 Kelvin (50 Celsius) 00:06:59.804 Temperature Threshold: 343 Kelvin (70 Celsius) 00:06:59.804 Available Spare: 0% 00:06:59.804 Available Spare Threshold: 0% 00:06:59.804 Life Percentage Used: 0% 00:06:59.804 Data Units Read: 2219 00:06:59.804 Data Units Written: 2006 00:06:59.804 Host Read Commands: 122516 00:06:59.804 Host Write Commands: 120786 00:06:59.804 Controller Busy Time: 0 minutes 00:06:59.804 Power Cycles: 0 00:06:59.804 Power On Hours: 0 hours 00:06:59.804 Unsafe Shutdowns: 0 00:06:59.804 Unrecoverable Media Errors: 0 00:06:59.804 Lifetime Error Log Entries: 0 00:06:59.804 Warning Temperature Time: 0 minutes 00:06:59.804 Critical Temperature Time: 0 minutes 00:06:59.804 00:06:59.804 Number of Queues 00:06:59.804 ================ 00:06:59.804 Number of I/O Submission Queues: 64 00:06:59.804 Number of I/O Completion Queues: 64 00:06:59.804 00:06:59.804 ZNS Specific Controller Data 00:06:59.804 ============================ 00:06:59.804 Zone Append Size Limit: 0 00:06:59.804 00:06:59.804 00:06:59.804 Active Namespaces 00:06:59.804 ================= 00:06:59.804 Namespace ID:1 00:06:59.804 Error Recovery Timeout: Unlimited 00:06:59.804 Command Set Identifier: NVM (00h) 00:06:59.804 Deallocate: Supported 00:06:59.804 Deallocated/Unwritten Error: Supported 00:06:59.804 Deallocated Read Value: All 0x00 00:06:59.804 Deallocate in Write Zeroes: Not Supported 00:06:59.804 Deallocated Guard Field: 0xFFFF 00:06:59.804 Flush: Supported 00:06:59.804 Reservation: Not Supported 00:06:59.804 Namespace Sharing Capabilities: Private 00:06:59.804 Size (in LBAs): 1048576 (4GiB) 00:06:59.804 Capacity (in LBAs): 1048576 (4GiB) 00:06:59.804 Utilization (in LBAs): 1048576 (4GiB) 00:06:59.804 Thin Provisioning: Not Supported 00:06:59.804 Per-NS Atomic Units: No 00:06:59.804 Maximum Single Source Range Length: 128 00:06:59.804 Maximum Copy Length: 128 00:06:59.804 Maximum Source Range Count: 128 00:06:59.804 NGUID/EUI64 Never Reused: No 00:06:59.804 Namespace Write Protected: No 00:06:59.804 Number of LBA Formats: 8 00:06:59.804 Current LBA Format: LBA Format #04 00:06:59.804 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:06:59.804 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:59.804 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:59.804 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:59.804 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:59.804 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:59.804 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:59.804 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:59.804 00:06:59.804 NVM Specific Namespace Data 00:06:59.804 =========================== 00:06:59.804 Logical Block Storage Tag Mask: 0 00:06:59.804 Protection Information Capabilities: 00:06:59.804 16b Guard Protection Information Storage Tag Support: No 00:06:59.804 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:59.804 Storage Tag Check Read Support: No 00:06:59.804 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Namespace ID:2 00:06:59.804 Error Recovery Timeout: Unlimited 00:06:59.804 Command Set Identifier: NVM (00h) 00:06:59.804 Deallocate: Supported 00:06:59.804 Deallocated/Unwritten Error: Supported 00:06:59.804 Deallocated Read Value: All 0x00 00:06:59.804 Deallocate in Write Zeroes: Not Supported 00:06:59.804 Deallocated Guard Field: 0xFFFF 00:06:59.804 Flush: Supported 00:06:59.804 Reservation: Not Supported 00:06:59.804 Namespace Sharing Capabilities: Private 00:06:59.804 Size (in LBAs): 1048576 (4GiB) 00:06:59.804 Capacity (in LBAs): 1048576 (4GiB) 00:06:59.804 Utilization (in LBAs): 1048576 (4GiB) 00:06:59.804 Thin Provisioning: Not Supported 00:06:59.804 Per-NS Atomic Units: No 00:06:59.804 Maximum Single Source Range Length: 128 00:06:59.804 Maximum Copy Length: 128 00:06:59.804 Maximum Source Range Count: 128 00:06:59.804 NGUID/EUI64 Never Reused: No 00:06:59.804 Namespace Write Protected: No 00:06:59.804 Number of LBA Formats: 8 00:06:59.804 Current LBA Format: LBA Format #04 00:06:59.804 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:59.804 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:59.804 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:59.804 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:59.804 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:59.804 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:59.804 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:59.804 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:59.804 00:06:59.804 NVM Specific Namespace Data 00:06:59.804 =========================== 00:06:59.804 Logical Block Storage Tag Mask: 0 00:06:59.804 Protection Information Capabilities: 00:06:59.804 16b Guard Protection Information Storage Tag Support: No 00:06:59.804 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:06:59.804 Storage Tag Check Read Support: No 00:06:59.804 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.804 Namespace ID:3 00:06:59.804 Error Recovery Timeout: Unlimited 00:06:59.804 Command Set Identifier: NVM (00h) 00:06:59.804 Deallocate: Supported 00:06:59.804 Deallocated/Unwritten Error: Supported 00:06:59.804 Deallocated Read Value: All 0x00 00:06:59.804 Deallocate in Write Zeroes: Not Supported 00:06:59.804 Deallocated Guard Field: 0xFFFF 00:06:59.804 Flush: Supported 00:06:59.804 Reservation: Not Supported 00:06:59.804 Namespace Sharing Capabilities: Private 00:06:59.804 Size (in LBAs): 1048576 (4GiB) 00:06:59.804 Capacity (in LBAs): 1048576 (4GiB) 00:06:59.804 Utilization (in LBAs): 1048576 (4GiB) 00:06:59.804 Thin Provisioning: Not Supported 00:06:59.804 Per-NS Atomic Units: No 00:06:59.804 Maximum Single Source Range Length: 128 00:06:59.804 Maximum Copy Length: 128 00:06:59.804 Maximum Source Range Count: 128 00:06:59.804 NGUID/EUI64 Never Reused: No 00:06:59.804 Namespace Write Protected: No 00:06:59.804 Number of LBA Formats: 8 00:06:59.804 Current LBA Format: LBA Format #04 00:06:59.804 LBA Format #00: Data Size: 512 Metadata Size: 0 00:06:59.805 LBA Format #01: Data Size: 512 Metadata Size: 8 00:06:59.805 LBA Format #02: Data Size: 512 Metadata Size: 16 00:06:59.805 LBA Format #03: Data Size: 512 Metadata Size: 64 00:06:59.805 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:06:59.805 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:06:59.805 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:06:59.805 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:06:59.805 00:06:59.805 NVM Specific Namespace Data 00:06:59.805 =========================== 00:06:59.805 Logical Block Storage Tag Mask: 0 00:06:59.805 Protection Information Capabilities: 00:06:59.805 16b Guard Protection Information Storage Tag Support: No 00:06:59.805 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:06:59.805 Storage Tag Check Read Support: No 00:06:59.805 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:06:59.805 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:06:59.805 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:00.064 ===================================================== 00:07:00.064 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:00.064 ===================================================== 00:07:00.064 Controller Capabilities/Features 00:07:00.064 ================================ 00:07:00.064 Vendor ID: 1b36 00:07:00.064 Subsystem Vendor ID: 1af4 00:07:00.064 Serial Number: 12340 00:07:00.064 Model Number: QEMU NVMe Ctrl 00:07:00.064 Firmware Version: 8.0.0 00:07:00.064 Recommended Arb Burst: 6 00:07:00.064 IEEE OUI Identifier: 00 54 52 00:07:00.064 Multi-path I/O 00:07:00.064 May have multiple subsystem ports: No 00:07:00.064 May have multiple controllers: No 00:07:00.064 Associated with SR-IOV VF: No 00:07:00.064 Max Data Transfer Size: 524288 00:07:00.064 Max Number of Namespaces: 256 00:07:00.064 Max Number of I/O Queues: 64 00:07:00.064 NVMe Specification Version (VS): 1.4 00:07:00.064 NVMe Specification Version (Identify): 1.4 00:07:00.064 Maximum Queue Entries: 2048 00:07:00.064 Contiguous Queues Required: Yes 00:07:00.064 Arbitration Mechanisms Supported 00:07:00.064 Weighted Round Robin: Not Supported 00:07:00.064 Vendor Specific: Not Supported 00:07:00.064 Reset Timeout: 7500 ms 00:07:00.064 Doorbell Stride: 4 bytes 00:07:00.064 NVM Subsystem Reset: Not Supported 00:07:00.064 Command Sets Supported 00:07:00.064 NVM Command Set: Supported 00:07:00.064 Boot Partition: Not Supported 00:07:00.064 Memory Page Size Minimum: 4096 bytes 00:07:00.064 Memory Page Size Maximum: 65536 bytes 00:07:00.064 Persistent Memory Region: Not Supported 00:07:00.064 Optional Asynchronous Events Supported 00:07:00.064 Namespace Attribute Notices: Supported 00:07:00.064 Firmware Activation Notices: Not Supported 00:07:00.064 ANA Change Notices: Not Supported 00:07:00.064 PLE Aggregate Log Change Notices: Not Supported 00:07:00.064 LBA Status Info Alert Notices: Not Supported 00:07:00.064 EGE Aggregate Log Change Notices: Not Supported 00:07:00.064 Normal NVM Subsystem Shutdown event: Not Supported 00:07:00.064 Zone Descriptor Change Notices: Not Supported 00:07:00.064 Discovery Log Change Notices: Not Supported 00:07:00.064 Controller Attributes 00:07:00.064 128-bit Host Identifier: Not Supported 00:07:00.064 Non-Operational Permissive Mode: Not Supported 00:07:00.064 NVM Sets: Not Supported 00:07:00.064 Read Recovery Levels: Not Supported 00:07:00.064 Endurance Groups: Not Supported 00:07:00.064 Predictable Latency Mode: Not Supported 00:07:00.064 Traffic Based Keep ALive: Not Supported 00:07:00.064 Namespace Granularity: Not Supported 00:07:00.064 SQ Associations: Not Supported 00:07:00.064 UUID List: Not Supported 00:07:00.064 Multi-Domain Subsystem: Not Supported 00:07:00.064 Fixed Capacity Management: Not Supported 00:07:00.064 Variable Capacity Management: Not Supported 00:07:00.064 Delete Endurance Group: Not Supported 00:07:00.064 Delete NVM Set: Not Supported 00:07:00.064 Extended LBA Formats Supported: Supported 00:07:00.064 Flexible Data Placement Supported: Not Supported 00:07:00.064 00:07:00.064 Controller Memory Buffer Support 00:07:00.064 ================================ 00:07:00.064 Supported: No 00:07:00.064 00:07:00.064 Persistent Memory Region Support 00:07:00.064 
================================ 00:07:00.064 Supported: No 00:07:00.064 00:07:00.064 Admin Command Set Attributes 00:07:00.064 ============================ 00:07:00.064 Security Send/Receive: Not Supported 00:07:00.064 Format NVM: Supported 00:07:00.064 Firmware Activate/Download: Not Supported 00:07:00.064 Namespace Management: Supported 00:07:00.064 Device Self-Test: Not Supported 00:07:00.064 Directives: Supported 00:07:00.064 NVMe-MI: Not Supported 00:07:00.064 Virtualization Management: Not Supported 00:07:00.064 Doorbell Buffer Config: Supported 00:07:00.064 Get LBA Status Capability: Not Supported 00:07:00.064 Command & Feature Lockdown Capability: Not Supported 00:07:00.064 Abort Command Limit: 4 00:07:00.064 Async Event Request Limit: 4 00:07:00.064 Number of Firmware Slots: N/A 00:07:00.064 Firmware Slot 1 Read-Only: N/A 00:07:00.064 Firmware Activation Without Reset: N/A 00:07:00.064 Multiple Update Detection Support: N/A 00:07:00.064 Firmware Update Granularity: No Information Provided 00:07:00.064 Per-Namespace SMART Log: Yes 00:07:00.064 Asymmetric Namespace Access Log Page: Not Supported 00:07:00.064 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:00.064 Command Effects Log Page: Supported 00:07:00.064 Get Log Page Extended Data: Supported 00:07:00.064 Telemetry Log Pages: Not Supported 00:07:00.064 Persistent Event Log Pages: Not Supported 00:07:00.064 Supported Log Pages Log Page: May Support 00:07:00.064 Commands Supported & Effects Log Page: Not Supported 00:07:00.064 Feature Identifiers & Effects Log Page:May Support 00:07:00.064 NVMe-MI Commands & Effects Log Page: May Support 00:07:00.064 Data Area 4 for Telemetry Log: Not Supported 00:07:00.064 Error Log Page Entries Supported: 1 00:07:00.064 Keep Alive: Not Supported 00:07:00.064 00:07:00.064 NVM Command Set Attributes 00:07:00.064 ========================== 00:07:00.064 Submission Queue Entry Size 00:07:00.064 Max: 64 00:07:00.064 Min: 64 00:07:00.064 Completion Queue Entry Size 00:07:00.064 Max: 16 00:07:00.064 Min: 16 00:07:00.064 Number of Namespaces: 256 00:07:00.064 Compare Command: Supported 00:07:00.064 Write Uncorrectable Command: Not Supported 00:07:00.064 Dataset Management Command: Supported 00:07:00.064 Write Zeroes Command: Supported 00:07:00.064 Set Features Save Field: Supported 00:07:00.064 Reservations: Not Supported 00:07:00.064 Timestamp: Supported 00:07:00.064 Copy: Supported 00:07:00.064 Volatile Write Cache: Present 00:07:00.064 Atomic Write Unit (Normal): 1 00:07:00.064 Atomic Write Unit (PFail): 1 00:07:00.064 Atomic Compare & Write Unit: 1 00:07:00.064 Fused Compare & Write: Not Supported 00:07:00.064 Scatter-Gather List 00:07:00.064 SGL Command Set: Supported 00:07:00.064 SGL Keyed: Not Supported 00:07:00.064 SGL Bit Bucket Descriptor: Not Supported 00:07:00.065 SGL Metadata Pointer: Not Supported 00:07:00.065 Oversized SGL: Not Supported 00:07:00.065 SGL Metadata Address: Not Supported 00:07:00.065 SGL Offset: Not Supported 00:07:00.065 Transport SGL Data Block: Not Supported 00:07:00.065 Replay Protected Memory Block: Not Supported 00:07:00.065 00:07:00.065 Firmware Slot Information 00:07:00.065 ========================= 00:07:00.065 Active slot: 1 00:07:00.065 Slot 1 Firmware Revision: 1.0 00:07:00.065 00:07:00.065 00:07:00.065 Commands Supported and Effects 00:07:00.065 ============================== 00:07:00.065 Admin Commands 00:07:00.065 -------------- 00:07:00.065 Delete I/O Submission Queue (00h): Supported 00:07:00.065 Create I/O Submission Queue (01h): Supported 00:07:00.065 
Get Log Page (02h): Supported 00:07:00.065 Delete I/O Completion Queue (04h): Supported 00:07:00.065 Create I/O Completion Queue (05h): Supported 00:07:00.065 Identify (06h): Supported 00:07:00.065 Abort (08h): Supported 00:07:00.065 Set Features (09h): Supported 00:07:00.065 Get Features (0Ah): Supported 00:07:00.065 Asynchronous Event Request (0Ch): Supported 00:07:00.065 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:00.065 Directive Send (19h): Supported 00:07:00.065 Directive Receive (1Ah): Supported 00:07:00.065 Virtualization Management (1Ch): Supported 00:07:00.065 Doorbell Buffer Config (7Ch): Supported 00:07:00.065 Format NVM (80h): Supported LBA-Change 00:07:00.065 I/O Commands 00:07:00.065 ------------ 00:07:00.065 Flush (00h): Supported LBA-Change 00:07:00.065 Write (01h): Supported LBA-Change 00:07:00.065 Read (02h): Supported 00:07:00.065 Compare (05h): Supported 00:07:00.065 Write Zeroes (08h): Supported LBA-Change 00:07:00.065 Dataset Management (09h): Supported LBA-Change 00:07:00.065 Unknown (0Ch): Supported 00:07:00.065 Unknown (12h): Supported 00:07:00.065 Copy (19h): Supported LBA-Change 00:07:00.065 Unknown (1Dh): Supported LBA-Change 00:07:00.065 00:07:00.065 Error Log 00:07:00.065 ========= 00:07:00.065 00:07:00.065 Arbitration 00:07:00.065 =========== 00:07:00.065 Arbitration Burst: no limit 00:07:00.065 00:07:00.065 Power Management 00:07:00.065 ================ 00:07:00.065 Number of Power States: 1 00:07:00.065 Current Power State: Power State #0 00:07:00.065 Power State #0: 00:07:00.065 Max Power: 25.00 W 00:07:00.065 Non-Operational State: Operational 00:07:00.065 Entry Latency: 16 microseconds 00:07:00.065 Exit Latency: 4 microseconds 00:07:00.065 Relative Read Throughput: 0 00:07:00.065 Relative Read Latency: 0 00:07:00.065 Relative Write Throughput: 0 00:07:00.065 Relative Write Latency: 0 00:07:00.065 Idle Power: Not Reported 00:07:00.065 Active Power: Not Reported 00:07:00.065 Non-Operational Permissive Mode: Not Supported 00:07:00.065 00:07:00.065 Health Information 00:07:00.065 ================== 00:07:00.065 Critical Warnings: 00:07:00.065 Available Spare Space: OK 00:07:00.065 Temperature: OK 00:07:00.065 Device Reliability: OK 00:07:00.065 Read Only: No 00:07:00.065 Volatile Memory Backup: OK 00:07:00.065 Current Temperature: 323 Kelvin (50 Celsius) 00:07:00.065 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:00.065 Available Spare: 0% 00:07:00.065 Available Spare Threshold: 0% 00:07:00.065 Life Percentage Used: 0% 00:07:00.065 Data Units Read: 686 00:07:00.065 Data Units Written: 614 00:07:00.065 Host Read Commands: 40119 00:07:00.065 Host Write Commands: 39905 00:07:00.065 Controller Busy Time: 0 minutes 00:07:00.065 Power Cycles: 0 00:07:00.065 Power On Hours: 0 hours 00:07:00.065 Unsafe Shutdowns: 0 00:07:00.065 Unrecoverable Media Errors: 0 00:07:00.065 Lifetime Error Log Entries: 0 00:07:00.065 Warning Temperature Time: 0 minutes 00:07:00.065 Critical Temperature Time: 0 minutes 00:07:00.065 00:07:00.065 Number of Queues 00:07:00.065 ================ 00:07:00.065 Number of I/O Submission Queues: 64 00:07:00.065 Number of I/O Completion Queues: 64 00:07:00.065 00:07:00.065 ZNS Specific Controller Data 00:07:00.065 ============================ 00:07:00.065 Zone Append Size Limit: 0 00:07:00.065 00:07:00.065 00:07:00.065 Active Namespaces 00:07:00.065 ================= 00:07:00.065 Namespace ID:1 00:07:00.065 Error Recovery Timeout: Unlimited 00:07:00.065 Command Set Identifier: NVM (00h) 00:07:00.065 Deallocate: Supported 
00:07:00.065 Deallocated/Unwritten Error: Supported 00:07:00.065 Deallocated Read Value: All 0x00 00:07:00.065 Deallocate in Write Zeroes: Not Supported 00:07:00.065 Deallocated Guard Field: 0xFFFF 00:07:00.065 Flush: Supported 00:07:00.065 Reservation: Not Supported 00:07:00.065 Metadata Transferred as: Separate Metadata Buffer 00:07:00.065 Namespace Sharing Capabilities: Private 00:07:00.065 Size (in LBAs): 1548666 (5GiB) 00:07:00.065 Capacity (in LBAs): 1548666 (5GiB) 00:07:00.065 Utilization (in LBAs): 1548666 (5GiB) 00:07:00.065 Thin Provisioning: Not Supported 00:07:00.065 Per-NS Atomic Units: No 00:07:00.065 Maximum Single Source Range Length: 128 00:07:00.065 Maximum Copy Length: 128 00:07:00.065 Maximum Source Range Count: 128 00:07:00.065 NGUID/EUI64 Never Reused: No 00:07:00.065 Namespace Write Protected: No 00:07:00.065 Number of LBA Formats: 8 00:07:00.065 Current LBA Format: LBA Format #07 00:07:00.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:00.065 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:00.065 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:00.065 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:00.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:00.065 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:00.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:00.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:00.065 00:07:00.065 NVM Specific Namespace Data 00:07:00.065 =========================== 00:07:00.065 Logical Block Storage Tag Mask: 0 00:07:00.065 Protection Information Capabilities: 00:07:00.065 16b Guard Protection Information Storage Tag Support: No 00:07:00.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:00.065 Storage Tag Check Read Support: No 00:07:00.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.065 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:00.065 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:00.324 ===================================================== 00:07:00.325 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:00.325 ===================================================== 00:07:00.325 Controller Capabilities/Features 00:07:00.325 ================================ 00:07:00.325 Vendor ID: 1b36 00:07:00.325 Subsystem Vendor ID: 1af4 00:07:00.325 Serial Number: 12341 00:07:00.325 Model Number: QEMU NVMe Ctrl 00:07:00.325 Firmware Version: 8.0.0 00:07:00.325 Recommended Arb Burst: 6 00:07:00.325 IEEE OUI Identifier: 00 54 52 00:07:00.325 Multi-path I/O 00:07:00.325 May have multiple subsystem ports: No 00:07:00.325 May have multiple 
controllers: No 00:07:00.325 Associated with SR-IOV VF: No 00:07:00.325 Max Data Transfer Size: 524288 00:07:00.325 Max Number of Namespaces: 256 00:07:00.325 Max Number of I/O Queues: 64 00:07:00.325 NVMe Specification Version (VS): 1.4 00:07:00.325 NVMe Specification Version (Identify): 1.4 00:07:00.325 Maximum Queue Entries: 2048 00:07:00.325 Contiguous Queues Required: Yes 00:07:00.325 Arbitration Mechanisms Supported 00:07:00.325 Weighted Round Robin: Not Supported 00:07:00.325 Vendor Specific: Not Supported 00:07:00.325 Reset Timeout: 7500 ms 00:07:00.325 Doorbell Stride: 4 bytes 00:07:00.325 NVM Subsystem Reset: Not Supported 00:07:00.325 Command Sets Supported 00:07:00.325 NVM Command Set: Supported 00:07:00.325 Boot Partition: Not Supported 00:07:00.325 Memory Page Size Minimum: 4096 bytes 00:07:00.325 Memory Page Size Maximum: 65536 bytes 00:07:00.325 Persistent Memory Region: Not Supported 00:07:00.325 Optional Asynchronous Events Supported 00:07:00.325 Namespace Attribute Notices: Supported 00:07:00.325 Firmware Activation Notices: Not Supported 00:07:00.325 ANA Change Notices: Not Supported 00:07:00.325 PLE Aggregate Log Change Notices: Not Supported 00:07:00.325 LBA Status Info Alert Notices: Not Supported 00:07:00.325 EGE Aggregate Log Change Notices: Not Supported 00:07:00.325 Normal NVM Subsystem Shutdown event: Not Supported 00:07:00.325 Zone Descriptor Change Notices: Not Supported 00:07:00.325 Discovery Log Change Notices: Not Supported 00:07:00.325 Controller Attributes 00:07:00.325 128-bit Host Identifier: Not Supported 00:07:00.325 Non-Operational Permissive Mode: Not Supported 00:07:00.325 NVM Sets: Not Supported 00:07:00.325 Read Recovery Levels: Not Supported 00:07:00.325 Endurance Groups: Not Supported 00:07:00.325 Predictable Latency Mode: Not Supported 00:07:00.325 Traffic Based Keep ALive: Not Supported 00:07:00.325 Namespace Granularity: Not Supported 00:07:00.325 SQ Associations: Not Supported 00:07:00.325 UUID List: Not Supported 00:07:00.325 Multi-Domain Subsystem: Not Supported 00:07:00.325 Fixed Capacity Management: Not Supported 00:07:00.325 Variable Capacity Management: Not Supported 00:07:00.325 Delete Endurance Group: Not Supported 00:07:00.325 Delete NVM Set: Not Supported 00:07:00.325 Extended LBA Formats Supported: Supported 00:07:00.325 Flexible Data Placement Supported: Not Supported 00:07:00.325 00:07:00.325 Controller Memory Buffer Support 00:07:00.325 ================================ 00:07:00.325 Supported: No 00:07:00.325 00:07:00.325 Persistent Memory Region Support 00:07:00.325 ================================ 00:07:00.325 Supported: No 00:07:00.325 00:07:00.325 Admin Command Set Attributes 00:07:00.325 ============================ 00:07:00.325 Security Send/Receive: Not Supported 00:07:00.325 Format NVM: Supported 00:07:00.325 Firmware Activate/Download: Not Supported 00:07:00.325 Namespace Management: Supported 00:07:00.325 Device Self-Test: Not Supported 00:07:00.325 Directives: Supported 00:07:00.325 NVMe-MI: Not Supported 00:07:00.325 Virtualization Management: Not Supported 00:07:00.325 Doorbell Buffer Config: Supported 00:07:00.325 Get LBA Status Capability: Not Supported 00:07:00.325 Command & Feature Lockdown Capability: Not Supported 00:07:00.325 Abort Command Limit: 4 00:07:00.325 Async Event Request Limit: 4 00:07:00.325 Number of Firmware Slots: N/A 00:07:00.325 Firmware Slot 1 Read-Only: N/A 00:07:00.325 Firmware Activation Without Reset: N/A 00:07:00.325 Multiple Update Detection Support: N/A 00:07:00.325 Firmware Update 
Granularity: No Information Provided 00:07:00.325 Per-Namespace SMART Log: Yes 00:07:00.325 Asymmetric Namespace Access Log Page: Not Supported 00:07:00.325 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:00.325 Command Effects Log Page: Supported 00:07:00.325 Get Log Page Extended Data: Supported 00:07:00.325 Telemetry Log Pages: Not Supported 00:07:00.325 Persistent Event Log Pages: Not Supported 00:07:00.325 Supported Log Pages Log Page: May Support 00:07:00.325 Commands Supported & Effects Log Page: Not Supported 00:07:00.325 Feature Identifiers & Effects Log Page:May Support 00:07:00.325 NVMe-MI Commands & Effects Log Page: May Support 00:07:00.325 Data Area 4 for Telemetry Log: Not Supported 00:07:00.325 Error Log Page Entries Supported: 1 00:07:00.325 Keep Alive: Not Supported 00:07:00.325 00:07:00.325 NVM Command Set Attributes 00:07:00.325 ========================== 00:07:00.325 Submission Queue Entry Size 00:07:00.325 Max: 64 00:07:00.325 Min: 64 00:07:00.325 Completion Queue Entry Size 00:07:00.325 Max: 16 00:07:00.325 Min: 16 00:07:00.325 Number of Namespaces: 256 00:07:00.325 Compare Command: Supported 00:07:00.325 Write Uncorrectable Command: Not Supported 00:07:00.325 Dataset Management Command: Supported 00:07:00.325 Write Zeroes Command: Supported 00:07:00.325 Set Features Save Field: Supported 00:07:00.325 Reservations: Not Supported 00:07:00.325 Timestamp: Supported 00:07:00.325 Copy: Supported 00:07:00.325 Volatile Write Cache: Present 00:07:00.325 Atomic Write Unit (Normal): 1 00:07:00.325 Atomic Write Unit (PFail): 1 00:07:00.325 Atomic Compare & Write Unit: 1 00:07:00.325 Fused Compare & Write: Not Supported 00:07:00.325 Scatter-Gather List 00:07:00.325 SGL Command Set: Supported 00:07:00.325 SGL Keyed: Not Supported 00:07:00.325 SGL Bit Bucket Descriptor: Not Supported 00:07:00.325 SGL Metadata Pointer: Not Supported 00:07:00.325 Oversized SGL: Not Supported 00:07:00.325 SGL Metadata Address: Not Supported 00:07:00.325 SGL Offset: Not Supported 00:07:00.325 Transport SGL Data Block: Not Supported 00:07:00.325 Replay Protected Memory Block: Not Supported 00:07:00.325 00:07:00.325 Firmware Slot Information 00:07:00.325 ========================= 00:07:00.325 Active slot: 1 00:07:00.325 Slot 1 Firmware Revision: 1.0 00:07:00.325 00:07:00.325 00:07:00.325 Commands Supported and Effects 00:07:00.325 ============================== 00:07:00.325 Admin Commands 00:07:00.325 -------------- 00:07:00.325 Delete I/O Submission Queue (00h): Supported 00:07:00.325 Create I/O Submission Queue (01h): Supported 00:07:00.325 Get Log Page (02h): Supported 00:07:00.325 Delete I/O Completion Queue (04h): Supported 00:07:00.325 Create I/O Completion Queue (05h): Supported 00:07:00.325 Identify (06h): Supported 00:07:00.325 Abort (08h): Supported 00:07:00.325 Set Features (09h): Supported 00:07:00.325 Get Features (0Ah): Supported 00:07:00.325 Asynchronous Event Request (0Ch): Supported 00:07:00.325 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:00.325 Directive Send (19h): Supported 00:07:00.325 Directive Receive (1Ah): Supported 00:07:00.325 Virtualization Management (1Ch): Supported 00:07:00.325 Doorbell Buffer Config (7Ch): Supported 00:07:00.325 Format NVM (80h): Supported LBA-Change 00:07:00.325 I/O Commands 00:07:00.325 ------------ 00:07:00.325 Flush (00h): Supported LBA-Change 00:07:00.325 Write (01h): Supported LBA-Change 00:07:00.325 Read (02h): Supported 00:07:00.325 Compare (05h): Supported 00:07:00.325 Write Zeroes (08h): Supported LBA-Change 00:07:00.325 
Dataset Management (09h): Supported LBA-Change 00:07:00.325 Unknown (0Ch): Supported 00:07:00.325 Unknown (12h): Supported 00:07:00.325 Copy (19h): Supported LBA-Change 00:07:00.325 Unknown (1Dh): Supported LBA-Change 00:07:00.325 00:07:00.325 Error Log 00:07:00.325 ========= 00:07:00.325 00:07:00.325 Arbitration 00:07:00.325 =========== 00:07:00.325 Arbitration Burst: no limit 00:07:00.325 00:07:00.325 Power Management 00:07:00.325 ================ 00:07:00.325 Number of Power States: 1 00:07:00.325 Current Power State: Power State #0 00:07:00.325 Power State #0: 00:07:00.325 Max Power: 25.00 W 00:07:00.325 Non-Operational State: Operational 00:07:00.325 Entry Latency: 16 microseconds 00:07:00.325 Exit Latency: 4 microseconds 00:07:00.325 Relative Read Throughput: 0 00:07:00.325 Relative Read Latency: 0 00:07:00.325 Relative Write Throughput: 0 00:07:00.326 Relative Write Latency: 0 00:07:00.326 Idle Power: Not Reported 00:07:00.326 Active Power: Not Reported 00:07:00.326 Non-Operational Permissive Mode: Not Supported 00:07:00.326 00:07:00.326 Health Information 00:07:00.326 ================== 00:07:00.326 Critical Warnings: 00:07:00.326 Available Spare Space: OK 00:07:00.326 Temperature: OK 00:07:00.326 Device Reliability: OK 00:07:00.326 Read Only: No 00:07:00.326 Volatile Memory Backup: OK 00:07:00.326 Current Temperature: 323 Kelvin (50 Celsius) 00:07:00.326 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:00.326 Available Spare: 0% 00:07:00.326 Available Spare Threshold: 0% 00:07:00.326 Life Percentage Used: 0% 00:07:00.326 Data Units Read: 1052 00:07:00.326 Data Units Written: 919 00:07:00.326 Host Read Commands: 59334 00:07:00.326 Host Write Commands: 58131 00:07:00.326 Controller Busy Time: 0 minutes 00:07:00.326 Power Cycles: 0 00:07:00.326 Power On Hours: 0 hours 00:07:00.326 Unsafe Shutdowns: 0 00:07:00.326 Unrecoverable Media Errors: 0 00:07:00.326 Lifetime Error Log Entries: 0 00:07:00.326 Warning Temperature Time: 0 minutes 00:07:00.326 Critical Temperature Time: 0 minutes 00:07:00.326 00:07:00.326 Number of Queues 00:07:00.326 ================ 00:07:00.326 Number of I/O Submission Queues: 64 00:07:00.326 Number of I/O Completion Queues: 64 00:07:00.326 00:07:00.326 ZNS Specific Controller Data 00:07:00.326 ============================ 00:07:00.326 Zone Append Size Limit: 0 00:07:00.326 00:07:00.326 00:07:00.326 Active Namespaces 00:07:00.326 ================= 00:07:00.326 Namespace ID:1 00:07:00.326 Error Recovery Timeout: Unlimited 00:07:00.326 Command Set Identifier: NVM (00h) 00:07:00.326 Deallocate: Supported 00:07:00.326 Deallocated/Unwritten Error: Supported 00:07:00.326 Deallocated Read Value: All 0x00 00:07:00.326 Deallocate in Write Zeroes: Not Supported 00:07:00.326 Deallocated Guard Field: 0xFFFF 00:07:00.326 Flush: Supported 00:07:00.326 Reservation: Not Supported 00:07:00.326 Namespace Sharing Capabilities: Private 00:07:00.326 Size (in LBAs): 1310720 (5GiB) 00:07:00.326 Capacity (in LBAs): 1310720 (5GiB) 00:07:00.326 Utilization (in LBAs): 1310720 (5GiB) 00:07:00.326 Thin Provisioning: Not Supported 00:07:00.326 Per-NS Atomic Units: No 00:07:00.326 Maximum Single Source Range Length: 128 00:07:00.326 Maximum Copy Length: 128 00:07:00.326 Maximum Source Range Count: 128 00:07:00.326 NGUID/EUI64 Never Reused: No 00:07:00.326 Namespace Write Protected: No 00:07:00.326 Number of LBA Formats: 8 00:07:00.326 Current LBA Format: LBA Format #04 00:07:00.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:00.326 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:00.326 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:00.326 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:00.326 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:00.326 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:00.326 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:00.326 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:00.326 00:07:00.326 NVM Specific Namespace Data 00:07:00.326 =========================== 00:07:00.326 Logical Block Storage Tag Mask: 0 00:07:00.326 Protection Information Capabilities: 00:07:00.326 16b Guard Protection Information Storage Tag Support: No 00:07:00.326 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:00.326 Storage Tag Check Read Support: No 00:07:00.326 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.326 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:00.326 23:44:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:00.586 ===================================================== 00:07:00.586 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:00.586 ===================================================== 00:07:00.586 Controller Capabilities/Features 00:07:00.586 ================================ 00:07:00.586 Vendor ID: 1b36 00:07:00.586 Subsystem Vendor ID: 1af4 00:07:00.586 Serial Number: 12342 00:07:00.586 Model Number: QEMU NVMe Ctrl 00:07:00.586 Firmware Version: 8.0.0 00:07:00.586 Recommended Arb Burst: 6 00:07:00.586 IEEE OUI Identifier: 00 54 52 00:07:00.586 Multi-path I/O 00:07:00.586 May have multiple subsystem ports: No 00:07:00.586 May have multiple controllers: No 00:07:00.586 Associated with SR-IOV VF: No 00:07:00.586 Max Data Transfer Size: 524288 00:07:00.586 Max Number of Namespaces: 256 00:07:00.586 Max Number of I/O Queues: 64 00:07:00.586 NVMe Specification Version (VS): 1.4 00:07:00.586 NVMe Specification Version (Identify): 1.4 00:07:00.586 Maximum Queue Entries: 2048 00:07:00.586 Contiguous Queues Required: Yes 00:07:00.586 Arbitration Mechanisms Supported 00:07:00.586 Weighted Round Robin: Not Supported 00:07:00.586 Vendor Specific: Not Supported 00:07:00.586 Reset Timeout: 7500 ms 00:07:00.586 Doorbell Stride: 4 bytes 00:07:00.586 NVM Subsystem Reset: Not Supported 00:07:00.586 Command Sets Supported 00:07:00.586 NVM Command Set: Supported 00:07:00.586 Boot Partition: Not Supported 00:07:00.586 Memory Page Size Minimum: 4096 bytes 00:07:00.586 Memory Page Size Maximum: 65536 bytes 00:07:00.586 Persistent Memory Region: Not Supported 00:07:00.586 Optional Asynchronous Events Supported 00:07:00.586 Namespace Attribute Notices: Supported 00:07:00.586 Firmware 
Activation Notices: Not Supported 00:07:00.586 ANA Change Notices: Not Supported 00:07:00.586 PLE Aggregate Log Change Notices: Not Supported 00:07:00.586 LBA Status Info Alert Notices: Not Supported 00:07:00.586 EGE Aggregate Log Change Notices: Not Supported 00:07:00.586 Normal NVM Subsystem Shutdown event: Not Supported 00:07:00.586 Zone Descriptor Change Notices: Not Supported 00:07:00.586 Discovery Log Change Notices: Not Supported 00:07:00.586 Controller Attributes 00:07:00.586 128-bit Host Identifier: Not Supported 00:07:00.586 Non-Operational Permissive Mode: Not Supported 00:07:00.586 NVM Sets: Not Supported 00:07:00.586 Read Recovery Levels: Not Supported 00:07:00.586 Endurance Groups: Not Supported 00:07:00.586 Predictable Latency Mode: Not Supported 00:07:00.586 Traffic Based Keep ALive: Not Supported 00:07:00.586 Namespace Granularity: Not Supported 00:07:00.586 SQ Associations: Not Supported 00:07:00.586 UUID List: Not Supported 00:07:00.586 Multi-Domain Subsystem: Not Supported 00:07:00.586 Fixed Capacity Management: Not Supported 00:07:00.586 Variable Capacity Management: Not Supported 00:07:00.586 Delete Endurance Group: Not Supported 00:07:00.586 Delete NVM Set: Not Supported 00:07:00.586 Extended LBA Formats Supported: Supported 00:07:00.586 Flexible Data Placement Supported: Not Supported 00:07:00.586 00:07:00.586 Controller Memory Buffer Support 00:07:00.586 ================================ 00:07:00.586 Supported: No 00:07:00.586 00:07:00.586 Persistent Memory Region Support 00:07:00.586 ================================ 00:07:00.586 Supported: No 00:07:00.586 00:07:00.586 Admin Command Set Attributes 00:07:00.586 ============================ 00:07:00.586 Security Send/Receive: Not Supported 00:07:00.586 Format NVM: Supported 00:07:00.586 Firmware Activate/Download: Not Supported 00:07:00.586 Namespace Management: Supported 00:07:00.586 Device Self-Test: Not Supported 00:07:00.586 Directives: Supported 00:07:00.586 NVMe-MI: Not Supported 00:07:00.586 Virtualization Management: Not Supported 00:07:00.586 Doorbell Buffer Config: Supported 00:07:00.586 Get LBA Status Capability: Not Supported 00:07:00.586 Command & Feature Lockdown Capability: Not Supported 00:07:00.586 Abort Command Limit: 4 00:07:00.586 Async Event Request Limit: 4 00:07:00.586 Number of Firmware Slots: N/A 00:07:00.586 Firmware Slot 1 Read-Only: N/A 00:07:00.586 Firmware Activation Without Reset: N/A 00:07:00.586 Multiple Update Detection Support: N/A 00:07:00.586 Firmware Update Granularity: No Information Provided 00:07:00.586 Per-Namespace SMART Log: Yes 00:07:00.586 Asymmetric Namespace Access Log Page: Not Supported 00:07:00.586 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:00.586 Command Effects Log Page: Supported 00:07:00.586 Get Log Page Extended Data: Supported 00:07:00.586 Telemetry Log Pages: Not Supported 00:07:00.586 Persistent Event Log Pages: Not Supported 00:07:00.586 Supported Log Pages Log Page: May Support 00:07:00.586 Commands Supported & Effects Log Page: Not Supported 00:07:00.586 Feature Identifiers & Effects Log Page:May Support 00:07:00.586 NVMe-MI Commands & Effects Log Page: May Support 00:07:00.586 Data Area 4 for Telemetry Log: Not Supported 00:07:00.586 Error Log Page Entries Supported: 1 00:07:00.586 Keep Alive: Not Supported 00:07:00.586 00:07:00.586 NVM Command Set Attributes 00:07:00.586 ========================== 00:07:00.586 Submission Queue Entry Size 00:07:00.586 Max: 64 00:07:00.586 Min: 64 00:07:00.586 Completion Queue Entry Size 00:07:00.586 Max: 16 
00:07:00.586 Min: 16 00:07:00.586 Number of Namespaces: 256 00:07:00.586 Compare Command: Supported 00:07:00.586 Write Uncorrectable Command: Not Supported 00:07:00.586 Dataset Management Command: Supported 00:07:00.586 Write Zeroes Command: Supported 00:07:00.586 Set Features Save Field: Supported 00:07:00.586 Reservations: Not Supported 00:07:00.586 Timestamp: Supported 00:07:00.586 Copy: Supported 00:07:00.586 Volatile Write Cache: Present 00:07:00.586 Atomic Write Unit (Normal): 1 00:07:00.586 Atomic Write Unit (PFail): 1 00:07:00.586 Atomic Compare & Write Unit: 1 00:07:00.586 Fused Compare & Write: Not Supported 00:07:00.586 Scatter-Gather List 00:07:00.586 SGL Command Set: Supported 00:07:00.586 SGL Keyed: Not Supported 00:07:00.586 SGL Bit Bucket Descriptor: Not Supported 00:07:00.586 SGL Metadata Pointer: Not Supported 00:07:00.586 Oversized SGL: Not Supported 00:07:00.586 SGL Metadata Address: Not Supported 00:07:00.586 SGL Offset: Not Supported 00:07:00.586 Transport SGL Data Block: Not Supported 00:07:00.586 Replay Protected Memory Block: Not Supported 00:07:00.586 00:07:00.586 Firmware Slot Information 00:07:00.586 ========================= 00:07:00.586 Active slot: 1 00:07:00.586 Slot 1 Firmware Revision: 1.0 00:07:00.586 00:07:00.586 00:07:00.586 Commands Supported and Effects 00:07:00.586 ============================== 00:07:00.586 Admin Commands 00:07:00.586 -------------- 00:07:00.586 Delete I/O Submission Queue (00h): Supported 00:07:00.586 Create I/O Submission Queue (01h): Supported 00:07:00.586 Get Log Page (02h): Supported 00:07:00.586 Delete I/O Completion Queue (04h): Supported 00:07:00.586 Create I/O Completion Queue (05h): Supported 00:07:00.586 Identify (06h): Supported 00:07:00.586 Abort (08h): Supported 00:07:00.586 Set Features (09h): Supported 00:07:00.586 Get Features (0Ah): Supported 00:07:00.586 Asynchronous Event Request (0Ch): Supported 00:07:00.586 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:00.587 Directive Send (19h): Supported 00:07:00.587 Directive Receive (1Ah): Supported 00:07:00.587 Virtualization Management (1Ch): Supported 00:07:00.587 Doorbell Buffer Config (7Ch): Supported 00:07:00.587 Format NVM (80h): Supported LBA-Change 00:07:00.587 I/O Commands 00:07:00.587 ------------ 00:07:00.587 Flush (00h): Supported LBA-Change 00:07:00.587 Write (01h): Supported LBA-Change 00:07:00.587 Read (02h): Supported 00:07:00.587 Compare (05h): Supported 00:07:00.587 Write Zeroes (08h): Supported LBA-Change 00:07:00.587 Dataset Management (09h): Supported LBA-Change 00:07:00.587 Unknown (0Ch): Supported 00:07:00.587 Unknown (12h): Supported 00:07:00.587 Copy (19h): Supported LBA-Change 00:07:00.587 Unknown (1Dh): Supported LBA-Change 00:07:00.587 00:07:00.587 Error Log 00:07:00.587 ========= 00:07:00.587 00:07:00.587 Arbitration 00:07:00.587 =========== 00:07:00.587 Arbitration Burst: no limit 00:07:00.587 00:07:00.587 Power Management 00:07:00.587 ================ 00:07:00.587 Number of Power States: 1 00:07:00.587 Current Power State: Power State #0 00:07:00.587 Power State #0: 00:07:00.587 Max Power: 25.00 W 00:07:00.587 Non-Operational State: Operational 00:07:00.587 Entry Latency: 16 microseconds 00:07:00.587 Exit Latency: 4 microseconds 00:07:00.587 Relative Read Throughput: 0 00:07:00.587 Relative Read Latency: 0 00:07:00.587 Relative Write Throughput: 0 00:07:00.587 Relative Write Latency: 0 00:07:00.587 Idle Power: Not Reported 00:07:00.587 Active Power: Not Reported 00:07:00.587 Non-Operational Permissive Mode: Not Supported 
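The Health Information sections that follow give temperatures in Kelvin with the Celsius value in parentheses; the two differ by about 273. A minimal conversion helper (hypothetical, not taken from the test scripts; assumes bash with bc available):

  # Convert a Kelvin reading, e.g. the 323 Kelvin current temperature reported below
  kelvin_to_celsius() { echo "scale=2; $1 - 273.15" | bc; }
  kelvin_to_celsius 323   # 49.85, which the log shows rounded to 50 Celsius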
00:07:00.587 00:07:00.587 Health Information 00:07:00.587 ================== 00:07:00.587 Critical Warnings: 00:07:00.587 Available Spare Space: OK 00:07:00.587 Temperature: OK 00:07:00.587 Device Reliability: OK 00:07:00.587 Read Only: No 00:07:00.587 Volatile Memory Backup: OK 00:07:00.587 Current Temperature: 323 Kelvin (50 Celsius) 00:07:00.587 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:00.587 Available Spare: 0% 00:07:00.587 Available Spare Threshold: 0% 00:07:00.587 Life Percentage Used: 0% 00:07:00.587 Data Units Read: 2219 00:07:00.587 Data Units Written: 2006 00:07:00.587 Host Read Commands: 122516 00:07:00.587 Host Write Commands: 120786 00:07:00.587 Controller Busy Time: 0 minutes 00:07:00.587 Power Cycles: 0 00:07:00.587 Power On Hours: 0 hours 00:07:00.587 Unsafe Shutdowns: 0 00:07:00.587 Unrecoverable Media Errors: 0 00:07:00.587 Lifetime Error Log Entries: 0 00:07:00.587 Warning Temperature Time: 0 minutes 00:07:00.587 Critical Temperature Time: 0 minutes 00:07:00.587 00:07:00.587 Number of Queues 00:07:00.587 ================ 00:07:00.587 Number of I/O Submission Queues: 64 00:07:00.587 Number of I/O Completion Queues: 64 00:07:00.587 00:07:00.587 ZNS Specific Controller Data 00:07:00.587 ============================ 00:07:00.587 Zone Append Size Limit: 0 00:07:00.587 00:07:00.587 00:07:00.587 Active Namespaces 00:07:00.587 ================= 00:07:00.587 Namespace ID:1 00:07:00.587 Error Recovery Timeout: Unlimited 00:07:00.587 Command Set Identifier: NVM (00h) 00:07:00.587 Deallocate: Supported 00:07:00.587 Deallocated/Unwritten Error: Supported 00:07:00.587 Deallocated Read Value: All 0x00 00:07:00.587 Deallocate in Write Zeroes: Not Supported 00:07:00.587 Deallocated Guard Field: 0xFFFF 00:07:00.587 Flush: Supported 00:07:00.587 Reservation: Not Supported 00:07:00.587 Namespace Sharing Capabilities: Private 00:07:00.587 Size (in LBAs): 1048576 (4GiB) 00:07:00.587 Capacity (in LBAs): 1048576 (4GiB) 00:07:00.587 Utilization (in LBAs): 1048576 (4GiB) 00:07:00.587 Thin Provisioning: Not Supported 00:07:00.587 Per-NS Atomic Units: No 00:07:00.587 Maximum Single Source Range Length: 128 00:07:00.587 Maximum Copy Length: 128 00:07:00.587 Maximum Source Range Count: 128 00:07:00.587 NGUID/EUI64 Never Reused: No 00:07:00.587 Namespace Write Protected: No 00:07:00.587 Number of LBA Formats: 8 00:07:00.587 Current LBA Format: LBA Format #04 00:07:00.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:00.587 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:00.587 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:00.587 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:00.587 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:00.587 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:00.587 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:00.587 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:00.587 00:07:00.587 NVM Specific Namespace Data 00:07:00.587 =========================== 00:07:00.587 Logical Block Storage Tag Mask: 0 00:07:00.587 Protection Information Capabilities: 00:07:00.587 16b Guard Protection Information Storage Tag Support: No 00:07:00.587 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:00.587 Storage Tag Check Read Support: No 00:07:00.587 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Namespace ID:2 00:07:00.587 Error Recovery Timeout: Unlimited 00:07:00.587 Command Set Identifier: NVM (00h) 00:07:00.587 Deallocate: Supported 00:07:00.587 Deallocated/Unwritten Error: Supported 00:07:00.587 Deallocated Read Value: All 0x00 00:07:00.587 Deallocate in Write Zeroes: Not Supported 00:07:00.587 Deallocated Guard Field: 0xFFFF 00:07:00.587 Flush: Supported 00:07:00.587 Reservation: Not Supported 00:07:00.587 Namespace Sharing Capabilities: Private 00:07:00.587 Size (in LBAs): 1048576 (4GiB) 00:07:00.587 Capacity (in LBAs): 1048576 (4GiB) 00:07:00.587 Utilization (in LBAs): 1048576 (4GiB) 00:07:00.587 Thin Provisioning: Not Supported 00:07:00.587 Per-NS Atomic Units: No 00:07:00.587 Maximum Single Source Range Length: 128 00:07:00.587 Maximum Copy Length: 128 00:07:00.587 Maximum Source Range Count: 128 00:07:00.587 NGUID/EUI64 Never Reused: No 00:07:00.587 Namespace Write Protected: No 00:07:00.587 Number of LBA Formats: 8 00:07:00.587 Current LBA Format: LBA Format #04 00:07:00.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:00.587 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:00.587 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:00.587 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:00.587 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:00.587 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:00.587 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:00.587 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:00.587 00:07:00.587 NVM Specific Namespace Data 00:07:00.587 =========================== 00:07:00.587 Logical Block Storage Tag Mask: 0 00:07:00.587 Protection Information Capabilities: 00:07:00.587 16b Guard Protection Information Storage Tag Support: No 00:07:00.587 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:00.587 Storage Tag Check Read Support: No 00:07:00.587 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.587 Namespace ID:3 00:07:00.587 Error Recovery Timeout: Unlimited 00:07:00.587 Command Set Identifier: NVM (00h) 00:07:00.587 Deallocate: Supported 00:07:00.587 Deallocated/Unwritten Error: Supported 00:07:00.587 Deallocated Read 
Value: All 0x00 00:07:00.587 Deallocate in Write Zeroes: Not Supported 00:07:00.587 Deallocated Guard Field: 0xFFFF 00:07:00.587 Flush: Supported 00:07:00.587 Reservation: Not Supported 00:07:00.587 Namespace Sharing Capabilities: Private 00:07:00.587 Size (in LBAs): 1048576 (4GiB) 00:07:00.587 Capacity (in LBAs): 1048576 (4GiB) 00:07:00.587 Utilization (in LBAs): 1048576 (4GiB) 00:07:00.587 Thin Provisioning: Not Supported 00:07:00.587 Per-NS Atomic Units: No 00:07:00.587 Maximum Single Source Range Length: 128 00:07:00.587 Maximum Copy Length: 128 00:07:00.587 Maximum Source Range Count: 128 00:07:00.588 NGUID/EUI64 Never Reused: No 00:07:00.588 Namespace Write Protected: No 00:07:00.588 Number of LBA Formats: 8 00:07:00.588 Current LBA Format: LBA Format #04 00:07:00.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:00.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:00.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:00.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:00.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:00.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:00.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:00.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:00.588 00:07:00.588 NVM Specific Namespace Data 00:07:00.588 =========================== 00:07:00.588 Logical Block Storage Tag Mask: 0 00:07:00.588 Protection Information Capabilities: 00:07:00.588 16b Guard Protection Information Storage Tag Support: No 00:07:00.588 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:00.588 Storage Tag Check Read Support: No 00:07:00.588 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.588 23:44:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:00.588 23:44:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:00.847 ===================================================== 00:07:00.847 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:00.847 ===================================================== 00:07:00.847 Controller Capabilities/Features 00:07:00.847 ================================ 00:07:00.847 Vendor ID: 1b36 00:07:00.847 Subsystem Vendor ID: 1af4 00:07:00.847 Serial Number: 12343 00:07:00.847 Model Number: QEMU NVMe Ctrl 00:07:00.847 Firmware Version: 8.0.0 00:07:00.847 Recommended Arb Burst: 6 00:07:00.847 IEEE OUI Identifier: 00 54 52 00:07:00.847 Multi-path I/O 00:07:00.847 May have multiple subsystem ports: No 00:07:00.847 May have multiple controllers: Yes 00:07:00.847 Associated with SR-IOV VF: No 00:07:00.847 Max Data Transfer Size: 524288 00:07:00.847 Max Number of Namespaces: 
256 00:07:00.848 Max Number of I/O Queues: 64 00:07:00.848 NVMe Specification Version (VS): 1.4 00:07:00.848 NVMe Specification Version (Identify): 1.4 00:07:00.848 Maximum Queue Entries: 2048 00:07:00.848 Contiguous Queues Required: Yes 00:07:00.848 Arbitration Mechanisms Supported 00:07:00.848 Weighted Round Robin: Not Supported 00:07:00.848 Vendor Specific: Not Supported 00:07:00.848 Reset Timeout: 7500 ms 00:07:00.848 Doorbell Stride: 4 bytes 00:07:00.848 NVM Subsystem Reset: Not Supported 00:07:00.848 Command Sets Supported 00:07:00.848 NVM Command Set: Supported 00:07:00.848 Boot Partition: Not Supported 00:07:00.848 Memory Page Size Minimum: 4096 bytes 00:07:00.848 Memory Page Size Maximum: 65536 bytes 00:07:00.848 Persistent Memory Region: Not Supported 00:07:00.848 Optional Asynchronous Events Supported 00:07:00.848 Namespace Attribute Notices: Supported 00:07:00.848 Firmware Activation Notices: Not Supported 00:07:00.848 ANA Change Notices: Not Supported 00:07:00.848 PLE Aggregate Log Change Notices: Not Supported 00:07:00.848 LBA Status Info Alert Notices: Not Supported 00:07:00.848 EGE Aggregate Log Change Notices: Not Supported 00:07:00.848 Normal NVM Subsystem Shutdown event: Not Supported 00:07:00.848 Zone Descriptor Change Notices: Not Supported 00:07:00.848 Discovery Log Change Notices: Not Supported 00:07:00.848 Controller Attributes 00:07:00.848 128-bit Host Identifier: Not Supported 00:07:00.848 Non-Operational Permissive Mode: Not Supported 00:07:00.848 NVM Sets: Not Supported 00:07:00.848 Read Recovery Levels: Not Supported 00:07:00.848 Endurance Groups: Supported 00:07:00.848 Predictable Latency Mode: Not Supported 00:07:00.848 Traffic Based Keep ALive: Not Supported 00:07:00.848 Namespace Granularity: Not Supported 00:07:00.848 SQ Associations: Not Supported 00:07:00.848 UUID List: Not Supported 00:07:00.848 Multi-Domain Subsystem: Not Supported 00:07:00.848 Fixed Capacity Management: Not Supported 00:07:00.848 Variable Capacity Management: Not Supported 00:07:00.848 Delete Endurance Group: Not Supported 00:07:00.848 Delete NVM Set: Not Supported 00:07:00.848 Extended LBA Formats Supported: Supported 00:07:00.848 Flexible Data Placement Supported: Supported 00:07:00.848 00:07:00.848 Controller Memory Buffer Support 00:07:00.848 ================================ 00:07:00.848 Supported: No 00:07:00.848 00:07:00.848 Persistent Memory Region Support 00:07:00.848 ================================ 00:07:00.848 Supported: No 00:07:00.848 00:07:00.848 Admin Command Set Attributes 00:07:00.848 ============================ 00:07:00.848 Security Send/Receive: Not Supported 00:07:00.848 Format NVM: Supported 00:07:00.848 Firmware Activate/Download: Not Supported 00:07:00.848 Namespace Management: Supported 00:07:00.848 Device Self-Test: Not Supported 00:07:00.848 Directives: Supported 00:07:00.848 NVMe-MI: Not Supported 00:07:00.848 Virtualization Management: Not Supported 00:07:00.848 Doorbell Buffer Config: Supported 00:07:00.848 Get LBA Status Capability: Not Supported 00:07:00.848 Command & Feature Lockdown Capability: Not Supported 00:07:00.848 Abort Command Limit: 4 00:07:00.848 Async Event Request Limit: 4 00:07:00.848 Number of Firmware Slots: N/A 00:07:00.848 Firmware Slot 1 Read-Only: N/A 00:07:00.848 Firmware Activation Without Reset: N/A 00:07:00.848 Multiple Update Detection Support: N/A 00:07:00.848 Firmware Update Granularity: No Information Provided 00:07:00.848 Per-Namespace SMART Log: Yes 00:07:00.848 Asymmetric Namespace Access Log Page: Not Supported 
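The namespace and health figures reported by spdk_nvme_identify in this run are internally consistent: 1048576 LBAs at the active 4096-byte LBA Format #04 is exactly 4 GiB for the namespaces above, and the 262144-LBA namespace reported further down for this FDP controller is 1 GiB by the same rule; likewise 323 Kelvin corresponds to the 50 Celsius shown alongside it. A minimal bash sketch of the same arithmetic (illustrative only, not produced by the test run):

  # capacity check: LBA count from the namespace data times the 4096-byte
  # data size of the current LBA Format #04
  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB
  echo $((  262144 * 4096 ))   # 1073741824 bytes = 1 GiB
  # temperature check: the tool reports Kelvin and the rounded Celsius value
  echo $(( 323 - 273 ))        # 50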
00:07:00.848 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:00.848 Command Effects Log Page: Supported 00:07:00.848 Get Log Page Extended Data: Supported 00:07:00.848 Telemetry Log Pages: Not Supported 00:07:00.848 Persistent Event Log Pages: Not Supported 00:07:00.848 Supported Log Pages Log Page: May Support 00:07:00.848 Commands Supported & Effects Log Page: Not Supported 00:07:00.848 Feature Identifiers & Effects Log Page:May Support 00:07:00.848 NVMe-MI Commands & Effects Log Page: May Support 00:07:00.848 Data Area 4 for Telemetry Log: Not Supported 00:07:00.848 Error Log Page Entries Supported: 1 00:07:00.848 Keep Alive: Not Supported 00:07:00.848 00:07:00.848 NVM Command Set Attributes 00:07:00.848 ========================== 00:07:00.848 Submission Queue Entry Size 00:07:00.848 Max: 64 00:07:00.848 Min: 64 00:07:00.848 Completion Queue Entry Size 00:07:00.848 Max: 16 00:07:00.848 Min: 16 00:07:00.848 Number of Namespaces: 256 00:07:00.848 Compare Command: Supported 00:07:00.848 Write Uncorrectable Command: Not Supported 00:07:00.848 Dataset Management Command: Supported 00:07:00.848 Write Zeroes Command: Supported 00:07:00.848 Set Features Save Field: Supported 00:07:00.848 Reservations: Not Supported 00:07:00.848 Timestamp: Supported 00:07:00.848 Copy: Supported 00:07:00.848 Volatile Write Cache: Present 00:07:00.848 Atomic Write Unit (Normal): 1 00:07:00.848 Atomic Write Unit (PFail): 1 00:07:00.848 Atomic Compare & Write Unit: 1 00:07:00.848 Fused Compare & Write: Not Supported 00:07:00.848 Scatter-Gather List 00:07:00.848 SGL Command Set: Supported 00:07:00.848 SGL Keyed: Not Supported 00:07:00.848 SGL Bit Bucket Descriptor: Not Supported 00:07:00.848 SGL Metadata Pointer: Not Supported 00:07:00.848 Oversized SGL: Not Supported 00:07:00.848 SGL Metadata Address: Not Supported 00:07:00.848 SGL Offset: Not Supported 00:07:00.848 Transport SGL Data Block: Not Supported 00:07:00.848 Replay Protected Memory Block: Not Supported 00:07:00.848 00:07:00.848 Firmware Slot Information 00:07:00.848 ========================= 00:07:00.848 Active slot: 1 00:07:00.848 Slot 1 Firmware Revision: 1.0 00:07:00.848 00:07:00.848 00:07:00.848 Commands Supported and Effects 00:07:00.848 ============================== 00:07:00.848 Admin Commands 00:07:00.848 -------------- 00:07:00.848 Delete I/O Submission Queue (00h): Supported 00:07:00.848 Create I/O Submission Queue (01h): Supported 00:07:00.848 Get Log Page (02h): Supported 00:07:00.848 Delete I/O Completion Queue (04h): Supported 00:07:00.848 Create I/O Completion Queue (05h): Supported 00:07:00.848 Identify (06h): Supported 00:07:00.848 Abort (08h): Supported 00:07:00.848 Set Features (09h): Supported 00:07:00.848 Get Features (0Ah): Supported 00:07:00.848 Asynchronous Event Request (0Ch): Supported 00:07:00.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:00.848 Directive Send (19h): Supported 00:07:00.848 Directive Receive (1Ah): Supported 00:07:00.848 Virtualization Management (1Ch): Supported 00:07:00.848 Doorbell Buffer Config (7Ch): Supported 00:07:00.848 Format NVM (80h): Supported LBA-Change 00:07:00.848 I/O Commands 00:07:00.848 ------------ 00:07:00.848 Flush (00h): Supported LBA-Change 00:07:00.848 Write (01h): Supported LBA-Change 00:07:00.848 Read (02h): Supported 00:07:00.848 Compare (05h): Supported 00:07:00.848 Write Zeroes (08h): Supported LBA-Change 00:07:00.848 Dataset Management (09h): Supported LBA-Change 00:07:00.848 Unknown (0Ch): Supported 00:07:00.848 Unknown (12h): Supported 00:07:00.848 Copy 
(19h): Supported LBA-Change 00:07:00.848 Unknown (1Dh): Supported LBA-Change 00:07:00.848 00:07:00.848 Error Log 00:07:00.848 ========= 00:07:00.848 00:07:00.848 Arbitration 00:07:00.848 =========== 00:07:00.848 Arbitration Burst: no limit 00:07:00.848 00:07:00.848 Power Management 00:07:00.848 ================ 00:07:00.848 Number of Power States: 1 00:07:00.848 Current Power State: Power State #0 00:07:00.848 Power State #0: 00:07:00.848 Max Power: 25.00 W 00:07:00.848 Non-Operational State: Operational 00:07:00.848 Entry Latency: 16 microseconds 00:07:00.848 Exit Latency: 4 microseconds 00:07:00.848 Relative Read Throughput: 0 00:07:00.848 Relative Read Latency: 0 00:07:00.848 Relative Write Throughput: 0 00:07:00.848 Relative Write Latency: 0 00:07:00.848 Idle Power: Not Reported 00:07:00.848 Active Power: Not Reported 00:07:00.848 Non-Operational Permissive Mode: Not Supported 00:07:00.848 00:07:00.848 Health Information 00:07:00.848 ================== 00:07:00.848 Critical Warnings: 00:07:00.848 Available Spare Space: OK 00:07:00.848 Temperature: OK 00:07:00.848 Device Reliability: OK 00:07:00.848 Read Only: No 00:07:00.848 Volatile Memory Backup: OK 00:07:00.848 Current Temperature: 323 Kelvin (50 Celsius) 00:07:00.848 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:00.848 Available Spare: 0% 00:07:00.849 Available Spare Threshold: 0% 00:07:00.849 Life Percentage Used: 0% 00:07:00.849 Data Units Read: 859 00:07:00.849 Data Units Written: 788 00:07:00.849 Host Read Commands: 41830 00:07:00.849 Host Write Commands: 41253 00:07:00.849 Controller Busy Time: 0 minutes 00:07:00.849 Power Cycles: 0 00:07:00.849 Power On Hours: 0 hours 00:07:00.849 Unsafe Shutdowns: 0 00:07:00.849 Unrecoverable Media Errors: 0 00:07:00.849 Lifetime Error Log Entries: 0 00:07:00.849 Warning Temperature Time: 0 minutes 00:07:00.849 Critical Temperature Time: 0 minutes 00:07:00.849 00:07:00.849 Number of Queues 00:07:00.849 ================ 00:07:00.849 Number of I/O Submission Queues: 64 00:07:00.849 Number of I/O Completion Queues: 64 00:07:00.849 00:07:00.849 ZNS Specific Controller Data 00:07:00.849 ============================ 00:07:00.849 Zone Append Size Limit: 0 00:07:00.849 00:07:00.849 00:07:00.849 Active Namespaces 00:07:00.849 ================= 00:07:00.849 Namespace ID:1 00:07:00.849 Error Recovery Timeout: Unlimited 00:07:00.849 Command Set Identifier: NVM (00h) 00:07:00.849 Deallocate: Supported 00:07:00.849 Deallocated/Unwritten Error: Supported 00:07:00.849 Deallocated Read Value: All 0x00 00:07:00.849 Deallocate in Write Zeroes: Not Supported 00:07:00.849 Deallocated Guard Field: 0xFFFF 00:07:00.849 Flush: Supported 00:07:00.849 Reservation: Not Supported 00:07:00.849 Namespace Sharing Capabilities: Multiple Controllers 00:07:00.849 Size (in LBAs): 262144 (1GiB) 00:07:00.849 Capacity (in LBAs): 262144 (1GiB) 00:07:00.849 Utilization (in LBAs): 262144 (1GiB) 00:07:00.849 Thin Provisioning: Not Supported 00:07:00.849 Per-NS Atomic Units: No 00:07:00.849 Maximum Single Source Range Length: 128 00:07:00.849 Maximum Copy Length: 128 00:07:00.849 Maximum Source Range Count: 128 00:07:00.849 NGUID/EUI64 Never Reused: No 00:07:00.849 Namespace Write Protected: No 00:07:00.849 Endurance group ID: 1 00:07:00.849 Number of LBA Formats: 8 00:07:00.849 Current LBA Format: LBA Format #04 00:07:00.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:00.849 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:00.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:00.849 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:00.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:00.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:00.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:00.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:00.849 00:07:00.849 Get Feature FDP: 00:07:00.849 ================ 00:07:00.849 Enabled: Yes 00:07:00.849 FDP configuration index: 0 00:07:00.849 00:07:00.849 FDP configurations log page 00:07:00.849 =========================== 00:07:00.849 Number of FDP configurations: 1 00:07:00.849 Version: 0 00:07:00.849 Size: 112 00:07:00.849 FDP Configuration Descriptor: 0 00:07:00.849 Descriptor Size: 96 00:07:00.849 Reclaim Group Identifier format: 2 00:07:00.849 FDP Volatile Write Cache: Not Present 00:07:00.849 FDP Configuration: Valid 00:07:00.849 Vendor Specific Size: 0 00:07:00.849 Number of Reclaim Groups: 2 00:07:00.849 Number of Recalim Unit Handles: 8 00:07:00.849 Max Placement Identifiers: 128 00:07:00.849 Number of Namespaces Suppprted: 256 00:07:00.849 Reclaim unit Nominal Size: 6000000 bytes 00:07:00.849 Estimated Reclaim Unit Time Limit: Not Reported 00:07:00.849 RUH Desc #000: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #001: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #002: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #003: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #004: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #005: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #006: RUH Type: Initially Isolated 00:07:00.849 RUH Desc #007: RUH Type: Initially Isolated 00:07:00.849 00:07:00.849 FDP reclaim unit handle usage log page 00:07:00.849 ====================================== 00:07:00.849 Number of Reclaim Unit Handles: 8 00:07:00.849 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:00.849 RUH Usage Desc #001: RUH Attributes: Unused 00:07:00.849 RUH Usage Desc #002: RUH Attributes: Unused 00:07:00.849 RUH Usage Desc #003: RUH Attributes: Unused 00:07:00.849 RUH Usage Desc #004: RUH Attributes: Unused 00:07:00.849 RUH Usage Desc #005: RUH Attributes: Unused 00:07:00.849 RUH Usage Desc #006: RUH Attributes: Unused 00:07:00.849 RUH Usage Desc #007: RUH Attributes: Unused 00:07:00.849 00:07:00.849 FDP statistics log page 00:07:00.849 ======================= 00:07:00.849 Host bytes with metadata written: 505323520 00:07:00.849 Media bytes with metadata written: 505380864 00:07:00.849 Media bytes erased: 0 00:07:00.849 00:07:00.849 FDP events log page 00:07:00.849 =================== 00:07:00.849 Number of FDP events: 0 00:07:00.849 00:07:00.849 NVM Specific Namespace Data 00:07:00.849 =========================== 00:07:00.849 Logical Block Storage Tag Mask: 0 00:07:00.849 Protection Information Capabilities: 00:07:00.849 16b Guard Protection Information Storage Tag Support: No 00:07:00.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:00.849 Storage Tag Check Read Support: No 00:07:00.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:00.849 00:07:00.849 real 0m1.267s 00:07:00.849 user 0m0.461s 00:07:00.849 sys 0m0.563s 00:07:00.849 23:44:33 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.849 23:44:33 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:00.849 ************************************ 00:07:00.849 END TEST nvme_identify 00:07:00.849 ************************************ 00:07:00.849 23:44:33 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:00.849 23:44:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.849 23:44:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.849 23:44:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.849 ************************************ 00:07:00.849 START TEST nvme_perf 00:07:00.849 ************************************ 00:07:00.849 23:44:33 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:00.849 23:44:33 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:02.226 Initializing NVMe Controllers 00:07:02.226 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:02.226 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:02.226 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:02.226 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:02.226 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:02.226 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:02.226 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:02.226 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:02.226 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:02.226 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:02.226 Initialization complete. Launching workers. 
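In the latency summary table below, the MiB/s column follows directly from the IOPS column times the 12288-byte I/O size passed to spdk_nvme_perf above (-o 12288). A minimal awk sketch reproducing the first row (illustrative only; the 8572.73 IOPS figure is taken from the table that follows):

  # IOPS * I/O size in bytes, converted to MiB/s (1 MiB = 1048576 bytes)
  awk 'BEGIN { printf "%.2f MiB/s\n", 8572.73 * 12288 / 1048576 }'   # prints 100.46 MiB/s

The Total row checks out the same way: 51500.38 * 12288 / 1048576 is approximately 603.52 MiB/s.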
00:07:02.226 ======================================================== 00:07:02.226 Latency(us) 00:07:02.226 Device Information : IOPS MiB/s Average min max 00:07:02.226 PCIE (0000:00:13.0) NSID 1 from core 0: 8572.73 100.46 14955.62 10477.02 38112.04 00:07:02.226 PCIE (0000:00:10.0) NSID 1 from core 0: 8572.73 100.46 14934.53 10328.05 36806.77 00:07:02.226 PCIE (0000:00:11.0) NSID 1 from core 0: 8572.73 100.46 14912.39 10154.20 35502.59 00:07:02.226 PCIE (0000:00:12.0) NSID 1 from core 0: 8572.73 100.46 14889.13 9332.67 34841.62 00:07:02.226 PCIE (0000:00:12.0) NSID 2 from core 0: 8572.73 100.46 14865.75 8971.30 33378.11 00:07:02.226 PCIE (0000:00:12.0) NSID 3 from core 0: 8636.71 101.21 14732.55 8868.11 25649.41 00:07:02.226 ======================================================== 00:07:02.226 Total : 51500.38 603.52 14881.48 8868.11 38112.04 00:07:02.226 00:07:02.226 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:02.226 ================================================================================= 00:07:02.226 1.00000% : 11090.708us 00:07:02.226 10.00000% : 12552.665us 00:07:02.226 25.00000% : 13409.674us 00:07:02.226 50.00000% : 14317.095us 00:07:02.226 75.00000% : 15930.289us 00:07:02.226 90.00000% : 18047.606us 00:07:02.226 95.00000% : 18955.028us 00:07:02.226 98.00000% : 19963.274us 00:07:02.226 99.00000% : 30852.332us 00:07:02.226 99.50000% : 37103.458us 00:07:02.226 99.90000% : 37910.055us 00:07:02.226 99.99000% : 38313.354us 00:07:02.226 99.99900% : 38313.354us 00:07:02.226 99.99990% : 38313.354us 00:07:02.226 99.99999% : 38313.354us 00:07:02.226 00:07:02.226 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:02.226 ================================================================================= 00:07:02.226 1.00000% : 11141.120us 00:07:02.226 10.00000% : 12552.665us 00:07:02.226 25.00000% : 13409.674us 00:07:02.226 50.00000% : 14317.095us 00:07:02.226 75.00000% : 16031.114us 00:07:02.226 90.00000% : 18148.431us 00:07:02.226 95.00000% : 18854.203us 00:07:02.226 98.00000% : 19559.975us 00:07:02.226 99.00000% : 29440.788us 00:07:02.226 99.50000% : 35893.563us 00:07:02.226 99.90000% : 36700.160us 00:07:02.226 99.99000% : 36901.809us 00:07:02.226 99.99900% : 36901.809us 00:07:02.226 99.99990% : 36901.809us 00:07:02.226 99.99999% : 36901.809us 00:07:02.226 00:07:02.226 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:02.226 ================================================================================= 00:07:02.226 1.00000% : 11141.120us 00:07:02.226 10.00000% : 12552.665us 00:07:02.226 25.00000% : 13409.674us 00:07:02.226 50.00000% : 14216.271us 00:07:02.226 75.00000% : 16131.938us 00:07:02.226 90.00000% : 18047.606us 00:07:02.226 95.00000% : 18955.028us 00:07:02.226 98.00000% : 20064.098us 00:07:02.226 99.00000% : 27827.594us 00:07:02.226 99.50000% : 34683.668us 00:07:02.226 99.90000% : 35490.265us 00:07:02.226 99.99000% : 35691.914us 00:07:02.226 99.99900% : 35691.914us 00:07:02.226 99.99990% : 35691.914us 00:07:02.226 99.99999% : 35691.914us 00:07:02.226 00:07:02.226 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:02.226 ================================================================================= 00:07:02.226 1.00000% : 10737.822us 00:07:02.226 10.00000% : 12451.840us 00:07:02.226 25.00000% : 13409.674us 00:07:02.226 50.00000% : 14216.271us 00:07:02.226 75.00000% : 15930.289us 00:07:02.226 90.00000% : 18047.606us 00:07:02.226 95.00000% : 19055.852us 00:07:02.226 98.00000% : 
20568.222us 00:07:02.226 99.00000% : 26819.348us 00:07:02.226 99.50000% : 33877.071us 00:07:02.226 99.90000% : 34683.668us 00:07:02.226 99.99000% : 34885.317us 00:07:02.226 99.99900% : 34885.317us 00:07:02.226 99.99990% : 34885.317us 00:07:02.226 99.99999% : 34885.317us 00:07:02.226 00:07:02.226 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:02.226 ================================================================================= 00:07:02.226 1.00000% : 10435.348us 00:07:02.226 10.00000% : 12603.077us 00:07:02.226 25.00000% : 13409.674us 00:07:02.226 50.00000% : 14216.271us 00:07:02.226 75.00000% : 16031.114us 00:07:02.226 90.00000% : 18148.431us 00:07:02.226 95.00000% : 19156.677us 00:07:02.226 98.00000% : 20265.748us 00:07:02.226 99.00000% : 25206.154us 00:07:02.226 99.50000% : 32465.526us 00:07:02.226 99.90000% : 33272.123us 00:07:02.226 99.99000% : 33473.772us 00:07:02.226 99.99900% : 33473.772us 00:07:02.226 99.99990% : 33473.772us 00:07:02.226 99.99999% : 33473.772us 00:07:02.227 00:07:02.227 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:02.227 ================================================================================= 00:07:02.227 1.00000% : 10536.172us 00:07:02.227 10.00000% : 12502.252us 00:07:02.227 25.00000% : 13409.674us 00:07:02.227 50.00000% : 14216.271us 00:07:02.227 75.00000% : 15930.289us 00:07:02.227 90.00000% : 17946.782us 00:07:02.227 95.00000% : 19055.852us 00:07:02.227 98.00000% : 19862.449us 00:07:02.227 99.00000% : 20164.923us 00:07:02.227 99.50000% : 24702.031us 00:07:02.227 99.90000% : 25508.628us 00:07:02.227 99.99000% : 25710.277us 00:07:02.227 99.99900% : 25710.277us 00:07:02.227 99.99990% : 25710.277us 00:07:02.227 99.99999% : 25710.277us 00:07:02.227 00:07:02.227 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:02.227 ============================================================================== 00:07:02.227 Range in us Cumulative IO count 00:07:02.227 10435.348 - 10485.760: 0.0117% ( 1) 00:07:02.227 10485.760 - 10536.172: 0.0350% ( 2) 00:07:02.227 10536.172 - 10586.585: 0.1049% ( 6) 00:07:02.227 10586.585 - 10636.997: 0.1632% ( 5) 00:07:02.227 10636.997 - 10687.409: 0.2449% ( 7) 00:07:02.227 10687.409 - 10737.822: 0.3148% ( 6) 00:07:02.227 10737.822 - 10788.234: 0.4081% ( 8) 00:07:02.227 10788.234 - 10838.646: 0.4664% ( 5) 00:07:02.227 10838.646 - 10889.058: 0.5364% ( 6) 00:07:02.227 10889.058 - 10939.471: 0.7113% ( 15) 00:07:02.227 10939.471 - 10989.883: 0.7929% ( 7) 00:07:02.227 10989.883 - 11040.295: 0.9095% ( 10) 00:07:02.227 11040.295 - 11090.708: 1.0261% ( 10) 00:07:02.227 11090.708 - 11141.120: 1.1660% ( 12) 00:07:02.227 11141.120 - 11191.532: 1.3060% ( 12) 00:07:02.227 11191.532 - 11241.945: 1.4576% ( 13) 00:07:02.227 11241.945 - 11292.357: 1.6441% ( 16) 00:07:02.227 11292.357 - 11342.769: 1.8307% ( 16) 00:07:02.227 11342.769 - 11393.182: 2.0639% ( 20) 00:07:02.227 11393.182 - 11443.594: 2.2971% ( 20) 00:07:02.227 11443.594 - 11494.006: 2.5653% ( 23) 00:07:02.227 11494.006 - 11544.418: 2.7752% ( 18) 00:07:02.227 11544.418 - 11594.831: 3.0201% ( 21) 00:07:02.227 11594.831 - 11645.243: 3.1833% ( 14) 00:07:02.227 11645.243 - 11695.655: 3.4282% ( 21) 00:07:02.227 11695.655 - 11746.068: 3.6497% ( 19) 00:07:02.227 11746.068 - 11796.480: 3.9179% ( 23) 00:07:02.227 11796.480 - 11846.892: 4.2327% ( 27) 00:07:02.227 11846.892 - 11897.305: 4.5009% ( 23) 00:07:02.227 11897.305 - 11947.717: 4.8507% ( 30) 00:07:02.227 11947.717 - 11998.129: 5.2589% ( 35) 00:07:02.227 11998.129 - 12048.542: 
5.5970% ( 29) 00:07:02.227 12048.542 - 12098.954: 5.9468% ( 30) 00:07:02.227 12098.954 - 12149.366: 6.2733% ( 28) 00:07:02.227 12149.366 - 12199.778: 6.6581% ( 33) 00:07:02.227 12199.778 - 12250.191: 7.0779% ( 36) 00:07:02.227 12250.191 - 12300.603: 7.4743% ( 34) 00:07:02.227 12300.603 - 12351.015: 7.9291% ( 39) 00:07:02.227 12351.015 - 12401.428: 8.4188% ( 42) 00:07:02.227 12401.428 - 12451.840: 8.9202% ( 43) 00:07:02.227 12451.840 - 12502.252: 9.4333% ( 44) 00:07:02.227 12502.252 - 12552.665: 10.0396% ( 52) 00:07:02.227 12552.665 - 12603.077: 10.7626% ( 62) 00:07:02.227 12603.077 - 12653.489: 11.4739% ( 61) 00:07:02.227 12653.489 - 12703.902: 12.2435% ( 66) 00:07:02.227 12703.902 - 12754.314: 13.0247% ( 67) 00:07:02.227 12754.314 - 12804.726: 13.8526% ( 71) 00:07:02.227 12804.726 - 12855.138: 14.5872% ( 63) 00:07:02.227 12855.138 - 12905.551: 15.4384% ( 73) 00:07:02.227 12905.551 - 13006.375: 17.2225% ( 153) 00:07:02.227 13006.375 - 13107.200: 19.1348% ( 164) 00:07:02.227 13107.200 - 13208.025: 21.2687% ( 183) 00:07:02.227 13208.025 - 13308.849: 23.6590% ( 205) 00:07:02.227 13308.849 - 13409.674: 26.1660% ( 215) 00:07:02.227 13409.674 - 13510.498: 28.8829% ( 233) 00:07:02.227 13510.498 - 13611.323: 31.9496% ( 263) 00:07:02.227 13611.323 - 13712.148: 34.9697% ( 259) 00:07:02.227 13712.148 - 13812.972: 37.7799% ( 241) 00:07:02.227 13812.972 - 13913.797: 40.7533% ( 255) 00:07:02.227 13913.797 - 14014.622: 43.5401% ( 239) 00:07:02.227 14014.622 - 14115.446: 46.5485% ( 258) 00:07:02.227 14115.446 - 14216.271: 49.4869% ( 252) 00:07:02.227 14216.271 - 14317.095: 52.2971% ( 241) 00:07:02.227 14317.095 - 14417.920: 54.8274% ( 217) 00:07:02.227 14417.920 - 14518.745: 57.0896% ( 194) 00:07:02.227 14518.745 - 14619.569: 59.0485% ( 168) 00:07:02.227 14619.569 - 14720.394: 60.9841% ( 166) 00:07:02.227 14720.394 - 14821.218: 62.7915% ( 155) 00:07:02.227 14821.218 - 14922.043: 64.2141% ( 122) 00:07:02.227 14922.043 - 15022.868: 65.5201% ( 112) 00:07:02.227 15022.868 - 15123.692: 66.7094% ( 102) 00:07:02.227 15123.692 - 15224.517: 67.7938% ( 93) 00:07:02.227 15224.517 - 15325.342: 68.7733% ( 84) 00:07:02.227 15325.342 - 15426.166: 69.9860% ( 104) 00:07:02.227 15426.166 - 15526.991: 71.0005% ( 87) 00:07:02.227 15526.991 - 15627.815: 72.0149% ( 87) 00:07:02.227 15627.815 - 15728.640: 73.1693% ( 99) 00:07:02.227 15728.640 - 15829.465: 74.1954% ( 88) 00:07:02.227 15829.465 - 15930.289: 75.1516% ( 82) 00:07:02.227 15930.289 - 16031.114: 76.0611% ( 78) 00:07:02.227 16031.114 - 16131.938: 77.0056% ( 81) 00:07:02.227 16131.938 - 16232.763: 78.0784% ( 92) 00:07:02.227 16232.763 - 16333.588: 79.0812% ( 86) 00:07:02.227 16333.588 - 16434.412: 79.8974% ( 70) 00:07:02.227 16434.412 - 16535.237: 80.6670% ( 66) 00:07:02.227 16535.237 - 16636.062: 81.4249% ( 65) 00:07:02.227 16636.062 - 16736.886: 81.9846% ( 48) 00:07:02.227 16736.886 - 16837.711: 82.4860% ( 43) 00:07:02.227 16837.711 - 16938.535: 83.0340% ( 47) 00:07:02.227 16938.535 - 17039.360: 83.5354% ( 43) 00:07:02.227 17039.360 - 17140.185: 84.1185% ( 50) 00:07:02.227 17140.185 - 17241.009: 84.5965% ( 41) 00:07:02.227 17241.009 - 17341.834: 85.1679% ( 49) 00:07:02.227 17341.834 - 17442.658: 85.7160% ( 47) 00:07:02.227 17442.658 - 17543.483: 86.4506% ( 63) 00:07:02.227 17543.483 - 17644.308: 87.2668% ( 70) 00:07:02.227 17644.308 - 17745.132: 88.1763% ( 78) 00:07:02.227 17745.132 - 17845.957: 89.0392% ( 74) 00:07:02.227 17845.957 - 17946.782: 89.7738% ( 63) 00:07:02.227 17946.782 - 18047.606: 90.4151% ( 55) 00:07:02.227 18047.606 - 18148.431: 91.0798% ( 57) 
00:07:02.227 18148.431 - 18249.255: 91.8144% ( 63) 00:07:02.227 18249.255 - 18350.080: 92.4907% ( 58) 00:07:02.228 18350.080 - 18450.905: 93.0737% ( 50) 00:07:02.228 18450.905 - 18551.729: 93.5868% ( 44) 00:07:02.228 18551.729 - 18652.554: 94.1115% ( 45) 00:07:02.228 18652.554 - 18753.378: 94.4496% ( 29) 00:07:02.228 18753.378 - 18854.203: 94.8228% ( 32) 00:07:02.228 18854.203 - 18955.028: 95.2775% ( 39) 00:07:02.228 18955.028 - 19055.852: 95.6273% ( 30) 00:07:02.228 19055.852 - 19156.677: 95.9771% ( 30) 00:07:02.228 19156.677 - 19257.502: 96.3270% ( 30) 00:07:02.228 19257.502 - 19358.326: 96.7351% ( 35) 00:07:02.228 19358.326 - 19459.151: 97.0616% ( 28) 00:07:02.228 19459.151 - 19559.975: 97.3414% ( 24) 00:07:02.228 19559.975 - 19660.800: 97.5746% ( 20) 00:07:02.228 19660.800 - 19761.625: 97.7962% ( 19) 00:07:02.228 19761.625 - 19862.449: 97.9944% ( 17) 00:07:02.228 19862.449 - 19963.274: 98.1693% ( 15) 00:07:02.228 19963.274 - 20064.098: 98.3442% ( 15) 00:07:02.228 20064.098 - 20164.923: 98.4841% ( 12) 00:07:02.228 20164.923 - 20265.748: 98.5075% ( 2) 00:07:02.228 29642.437 - 29844.086: 98.5424% ( 3) 00:07:02.228 29844.086 - 30045.735: 98.6474% ( 9) 00:07:02.228 30045.735 - 30247.385: 98.7407% ( 8) 00:07:02.228 30247.385 - 30449.034: 98.8456% ( 9) 00:07:02.228 30449.034 - 30650.683: 98.9506% ( 9) 00:07:02.228 30650.683 - 30852.332: 99.0438% ( 8) 00:07:02.228 30852.332 - 31053.982: 99.1371% ( 8) 00:07:02.228 31053.982 - 31255.631: 99.2421% ( 9) 00:07:02.228 31255.631 - 31457.280: 99.2537% ( 1) 00:07:02.228 36498.511 - 36700.160: 99.3004% ( 4) 00:07:02.228 36700.160 - 36901.809: 99.4053% ( 9) 00:07:02.228 36901.809 - 37103.458: 99.5103% ( 9) 00:07:02.228 37103.458 - 37305.108: 99.6035% ( 8) 00:07:02.228 37305.108 - 37506.757: 99.7085% ( 9) 00:07:02.228 37506.757 - 37708.406: 99.8134% ( 9) 00:07:02.228 37708.406 - 37910.055: 99.9067% ( 8) 00:07:02.228 37910.055 - 38111.705: 99.9883% ( 7) 00:07:02.228 38111.705 - 38313.354: 100.0000% ( 1) 00:07:02.228 00:07:02.228 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:02.228 ============================================================================== 00:07:02.228 Range in us Cumulative IO count 00:07:02.228 10284.111 - 10334.523: 0.0233% ( 2) 00:07:02.228 10334.523 - 10384.935: 0.0466% ( 2) 00:07:02.228 10384.935 - 10435.348: 0.0583% ( 1) 00:07:02.228 10435.348 - 10485.760: 0.1049% ( 4) 00:07:02.228 10485.760 - 10536.172: 0.1283% ( 2) 00:07:02.228 10536.172 - 10586.585: 0.1749% ( 4) 00:07:02.228 10586.585 - 10636.997: 0.1866% ( 1) 00:07:02.228 10636.997 - 10687.409: 0.2215% ( 3) 00:07:02.228 10687.409 - 10737.822: 0.2565% ( 3) 00:07:02.228 10737.822 - 10788.234: 0.2915% ( 3) 00:07:02.228 10788.234 - 10838.646: 0.3148% ( 2) 00:07:02.228 10838.646 - 10889.058: 0.3731% ( 5) 00:07:02.228 10889.058 - 10939.471: 0.4897% ( 10) 00:07:02.228 10939.471 - 10989.883: 0.5947% ( 9) 00:07:02.228 10989.883 - 11040.295: 0.7346% ( 12) 00:07:02.228 11040.295 - 11090.708: 0.9095% ( 15) 00:07:02.228 11090.708 - 11141.120: 1.1077% ( 17) 00:07:02.228 11141.120 - 11191.532: 1.3060% ( 17) 00:07:02.228 11191.532 - 11241.945: 1.4342% ( 11) 00:07:02.228 11241.945 - 11292.357: 1.6208% ( 16) 00:07:02.228 11292.357 - 11342.769: 1.8773% ( 22) 00:07:02.228 11342.769 - 11393.182: 2.0989% ( 19) 00:07:02.228 11393.182 - 11443.594: 2.4953% ( 34) 00:07:02.228 11443.594 - 11494.006: 2.6702% ( 15) 00:07:02.228 11494.006 - 11544.418: 2.8918% ( 19) 00:07:02.228 11544.418 - 11594.831: 3.0900% ( 17) 00:07:02.228 11594.831 - 11645.243: 3.3465% ( 22) 00:07:02.228 
11645.243 - 11695.655: 3.7197% ( 32) 00:07:02.228 11695.655 - 11746.068: 4.0695% ( 30) 00:07:02.228 11746.068 - 11796.480: 4.4310% ( 31) 00:07:02.228 11796.480 - 11846.892: 4.6409% ( 18) 00:07:02.228 11846.892 - 11897.305: 4.8857% ( 21) 00:07:02.228 11897.305 - 11947.717: 5.4221% ( 46) 00:07:02.228 11947.717 - 11998.129: 5.7369% ( 27) 00:07:02.228 11998.129 - 12048.542: 5.9701% ( 20) 00:07:02.228 12048.542 - 12098.954: 6.1800% ( 18) 00:07:02.228 12098.954 - 12149.366: 6.5415% ( 31) 00:07:02.228 12149.366 - 12199.778: 7.0896% ( 47) 00:07:02.228 12199.778 - 12250.191: 7.4627% ( 32) 00:07:02.228 12250.191 - 12300.603: 7.8008% ( 29) 00:07:02.228 12300.603 - 12351.015: 8.2556% ( 39) 00:07:02.228 12351.015 - 12401.428: 8.6054% ( 30) 00:07:02.228 12401.428 - 12451.840: 9.1651% ( 48) 00:07:02.228 12451.840 - 12502.252: 9.7481% ( 50) 00:07:02.228 12502.252 - 12552.665: 10.2845% ( 46) 00:07:02.228 12552.665 - 12603.077: 10.9025% ( 53) 00:07:02.228 12603.077 - 12653.489: 11.7071% ( 69) 00:07:02.228 12653.489 - 12703.902: 12.2668% ( 48) 00:07:02.228 12703.902 - 12754.314: 13.0714% ( 69) 00:07:02.228 12754.314 - 12804.726: 13.7127% ( 55) 00:07:02.228 12804.726 - 12855.138: 14.5056% ( 68) 00:07:02.228 12855.138 - 12905.551: 15.3102% ( 69) 00:07:02.228 12905.551 - 13006.375: 17.1758% ( 160) 00:07:02.228 13006.375 - 13107.200: 19.3214% ( 184) 00:07:02.228 13107.200 - 13208.025: 21.5252% ( 189) 00:07:02.228 13208.025 - 13308.849: 23.8573% ( 200) 00:07:02.228 13308.849 - 13409.674: 26.5042% ( 227) 00:07:02.228 13409.674 - 13510.498: 29.2094% ( 232) 00:07:02.228 13510.498 - 13611.323: 32.2994% ( 265) 00:07:02.228 13611.323 - 13712.148: 35.3195% ( 259) 00:07:02.228 13712.148 - 13812.972: 38.2113% ( 248) 00:07:02.228 13812.972 - 13913.797: 41.0798% ( 246) 00:07:02.228 13913.797 - 14014.622: 43.9016% ( 242) 00:07:02.228 14014.622 - 14115.446: 46.5951% ( 231) 00:07:02.228 14115.446 - 14216.271: 49.2537% ( 228) 00:07:02.228 14216.271 - 14317.095: 51.5159% ( 194) 00:07:02.228 14317.095 - 14417.920: 53.9879% ( 212) 00:07:02.228 14417.920 - 14518.745: 56.2383% ( 193) 00:07:02.228 14518.745 - 14619.569: 58.4305% ( 188) 00:07:02.228 14619.569 - 14720.394: 60.4711% ( 175) 00:07:02.228 14720.394 - 14821.218: 62.2435% ( 152) 00:07:02.228 14821.218 - 14922.043: 63.8293% ( 136) 00:07:02.228 14922.043 - 15022.868: 65.4151% ( 136) 00:07:02.228 15022.868 - 15123.692: 67.0592% ( 141) 00:07:02.228 15123.692 - 15224.517: 68.3186% ( 108) 00:07:02.228 15224.517 - 15325.342: 69.3680% ( 90) 00:07:02.228 15325.342 - 15426.166: 70.6856% ( 113) 00:07:02.228 15426.166 - 15526.991: 71.6185% ( 80) 00:07:02.228 15526.991 - 15627.815: 72.5863% ( 83) 00:07:02.228 15627.815 - 15728.640: 73.4841% ( 77) 00:07:02.228 15728.640 - 15829.465: 74.1838% ( 60) 00:07:02.228 15829.465 - 15930.289: 74.8951% ( 61) 00:07:02.228 15930.289 - 16031.114: 75.6763% ( 67) 00:07:02.228 16031.114 - 16131.938: 76.5275% ( 73) 00:07:02.228 16131.938 - 16232.763: 77.3787% ( 73) 00:07:02.228 16232.763 - 16333.588: 78.2766% ( 77) 00:07:02.228 16333.588 - 16434.412: 79.0462% ( 66) 00:07:02.228 16434.412 - 16535.237: 79.8507% ( 69) 00:07:02.228 16535.237 - 16636.062: 80.6203% ( 66) 00:07:02.228 16636.062 - 16736.886: 81.2267% ( 52) 00:07:02.228 16736.886 - 16837.711: 81.7397% ( 44) 00:07:02.228 16837.711 - 16938.535: 82.3577% ( 53) 00:07:02.228 16938.535 - 17039.360: 83.0107% ( 56) 00:07:02.228 17039.360 - 17140.185: 83.5588% ( 47) 00:07:02.229 17140.185 - 17241.009: 84.2351% ( 58) 00:07:02.229 17241.009 - 17341.834: 84.7831% ( 47) 00:07:02.229 17341.834 - 17442.658: 
85.3661% ( 50) 00:07:02.229 17442.658 - 17543.483: 86.1124% ( 64) 00:07:02.229 17543.483 - 17644.308: 86.9170% ( 69) 00:07:02.229 17644.308 - 17745.132: 87.6049% ( 59) 00:07:02.229 17745.132 - 17845.957: 88.2812% ( 58) 00:07:02.229 17845.957 - 17946.782: 89.1674% ( 76) 00:07:02.229 17946.782 - 18047.606: 89.9370% ( 66) 00:07:02.229 18047.606 - 18148.431: 90.7533% ( 70) 00:07:02.229 18148.431 - 18249.255: 91.6161% ( 74) 00:07:02.229 18249.255 - 18350.080: 92.4674% ( 73) 00:07:02.229 18350.080 - 18450.905: 93.1670% ( 60) 00:07:02.229 18450.905 - 18551.729: 93.7267% ( 48) 00:07:02.229 18551.729 - 18652.554: 94.3797% ( 56) 00:07:02.229 18652.554 - 18753.378: 94.9627% ( 50) 00:07:02.229 18753.378 - 18854.203: 95.5457% ( 50) 00:07:02.229 18854.203 - 18955.028: 95.9655% ( 36) 00:07:02.229 18955.028 - 19055.852: 96.4319% ( 40) 00:07:02.229 19055.852 - 19156.677: 97.0382% ( 52) 00:07:02.229 19156.677 - 19257.502: 97.2948% ( 22) 00:07:02.229 19257.502 - 19358.326: 97.4813% ( 16) 00:07:02.229 19358.326 - 19459.151: 97.7729% ( 25) 00:07:02.229 19459.151 - 19559.975: 98.0061% ( 20) 00:07:02.229 19559.975 - 19660.800: 98.1343% ( 11) 00:07:02.229 19660.800 - 19761.625: 98.2509% ( 10) 00:07:02.229 19761.625 - 19862.449: 98.3326% ( 7) 00:07:02.229 19862.449 - 19963.274: 98.3675% ( 3) 00:07:02.229 19963.274 - 20064.098: 98.3909% ( 2) 00:07:02.229 20064.098 - 20164.923: 98.4375% ( 4) 00:07:02.229 20164.923 - 20265.748: 98.4725% ( 3) 00:07:02.229 20265.748 - 20366.572: 98.5075% ( 3) 00:07:02.229 28230.892 - 28432.542: 98.6007% ( 8) 00:07:02.229 28432.542 - 28634.191: 98.6707% ( 6) 00:07:02.229 28634.191 - 28835.840: 98.7407% ( 6) 00:07:02.229 28835.840 - 29037.489: 98.8456% ( 9) 00:07:02.229 29037.489 - 29239.138: 98.9389% ( 8) 00:07:02.229 29239.138 - 29440.788: 99.0322% ( 8) 00:07:02.229 29440.788 - 29642.437: 99.1138% ( 7) 00:07:02.229 29642.437 - 29844.086: 99.2071% ( 8) 00:07:02.229 29844.086 - 30045.735: 99.2537% ( 4) 00:07:02.229 34885.317 - 35086.966: 99.2654% ( 1) 00:07:02.229 35086.966 - 35288.615: 99.3237% ( 5) 00:07:02.229 35288.615 - 35490.265: 99.4053% ( 7) 00:07:02.229 35490.265 - 35691.914: 99.4869% ( 7) 00:07:02.229 35691.914 - 35893.563: 99.5802% ( 8) 00:07:02.229 35893.563 - 36095.212: 99.6852% ( 9) 00:07:02.229 36095.212 - 36296.862: 99.7785% ( 8) 00:07:02.229 36296.862 - 36498.511: 99.8368% ( 5) 00:07:02.229 36498.511 - 36700.160: 99.9534% ( 10) 00:07:02.229 36700.160 - 36901.809: 100.0000% ( 4) 00:07:02.229 00:07:02.229 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:02.229 ============================================================================== 00:07:02.229 Range in us Cumulative IO count 00:07:02.229 10132.874 - 10183.286: 0.0233% ( 2) 00:07:02.229 10183.286 - 10233.698: 0.0466% ( 2) 00:07:02.229 10233.698 - 10284.111: 0.0816% ( 3) 00:07:02.229 10284.111 - 10334.523: 0.1166% ( 3) 00:07:02.229 10334.523 - 10384.935: 0.1632% ( 4) 00:07:02.229 10384.935 - 10435.348: 0.1982% ( 3) 00:07:02.229 10435.348 - 10485.760: 0.2332% ( 3) 00:07:02.229 10485.760 - 10536.172: 0.2799% ( 4) 00:07:02.229 10536.172 - 10586.585: 0.3032% ( 2) 00:07:02.229 10586.585 - 10636.997: 0.3265% ( 2) 00:07:02.229 10636.997 - 10687.409: 0.3615% ( 3) 00:07:02.229 10687.409 - 10737.822: 0.3848% ( 2) 00:07:02.229 10737.822 - 10788.234: 0.4198% ( 3) 00:07:02.229 10788.234 - 10838.646: 0.4548% ( 3) 00:07:02.229 10838.646 - 10889.058: 0.5131% ( 5) 00:07:02.229 10889.058 - 10939.471: 0.6646% ( 13) 00:07:02.229 10939.471 - 10989.883: 0.7929% ( 11) 00:07:02.229 10989.883 - 11040.295: 0.8629% ( 6) 
00:07:02.229 11040.295 - 11090.708: 0.9911% ( 11) 00:07:02.229 11090.708 - 11141.120: 1.1311% ( 12) 00:07:02.229 11141.120 - 11191.532: 1.3060% ( 15) 00:07:02.229 11191.532 - 11241.945: 1.5508% ( 21) 00:07:02.229 11241.945 - 11292.357: 1.7957% ( 21) 00:07:02.229 11292.357 - 11342.769: 1.9706% ( 15) 00:07:02.229 11342.769 - 11393.182: 2.1222% ( 13) 00:07:02.229 11393.182 - 11443.594: 2.3204% ( 17) 00:07:02.229 11443.594 - 11494.006: 2.5770% ( 22) 00:07:02.229 11494.006 - 11544.418: 2.7985% ( 19) 00:07:02.229 11544.418 - 11594.831: 3.0667% ( 23) 00:07:02.229 11594.831 - 11645.243: 3.3349% ( 23) 00:07:02.229 11645.243 - 11695.655: 3.6381% ( 26) 00:07:02.229 11695.655 - 11746.068: 3.9296% ( 25) 00:07:02.229 11746.068 - 11796.480: 4.3027% ( 32) 00:07:02.229 11796.480 - 11846.892: 4.7225% ( 36) 00:07:02.229 11846.892 - 11897.305: 5.0840% ( 31) 00:07:02.229 11897.305 - 11947.717: 5.4221% ( 29) 00:07:02.229 11947.717 - 11998.129: 5.7136% ( 25) 00:07:02.229 11998.129 - 12048.542: 6.0168% ( 26) 00:07:02.229 12048.542 - 12098.954: 6.3316% ( 27) 00:07:02.229 12098.954 - 12149.366: 6.6465% ( 27) 00:07:02.229 12149.366 - 12199.778: 6.9613% ( 27) 00:07:02.229 12199.778 - 12250.191: 7.3927% ( 37) 00:07:02.229 12250.191 - 12300.603: 7.8475% ( 39) 00:07:02.229 12300.603 - 12351.015: 8.3839% ( 46) 00:07:02.229 12351.015 - 12401.428: 8.9202% ( 46) 00:07:02.229 12401.428 - 12451.840: 9.4799% ( 48) 00:07:02.229 12451.840 - 12502.252: 9.9813% ( 43) 00:07:02.229 12502.252 - 12552.665: 10.5177% ( 46) 00:07:02.229 12552.665 - 12603.077: 11.0191% ( 43) 00:07:02.229 12603.077 - 12653.489: 11.6255% ( 52) 00:07:02.229 12653.489 - 12703.902: 12.1968% ( 49) 00:07:02.229 12703.902 - 12754.314: 12.7565% ( 48) 00:07:02.229 12754.314 - 12804.726: 13.3396% ( 50) 00:07:02.229 12804.726 - 12855.138: 13.9342% ( 51) 00:07:02.229 12855.138 - 12905.551: 14.6922% ( 65) 00:07:02.229 12905.551 - 13006.375: 16.3713% ( 144) 00:07:02.229 13006.375 - 13107.200: 18.1786% ( 155) 00:07:02.229 13107.200 - 13208.025: 20.5224% ( 201) 00:07:02.229 13208.025 - 13308.849: 23.3326% ( 241) 00:07:02.229 13308.849 - 13409.674: 26.0145% ( 230) 00:07:02.229 13409.674 - 13510.498: 28.8246% ( 241) 00:07:02.229 13510.498 - 13611.323: 31.8680% ( 261) 00:07:02.229 13611.323 - 13712.148: 35.2146% ( 287) 00:07:02.229 13712.148 - 13812.972: 38.6660% ( 296) 00:07:02.229 13812.972 - 13913.797: 41.8027% ( 269) 00:07:02.229 13913.797 - 14014.622: 44.8927% ( 265) 00:07:02.229 14014.622 - 14115.446: 48.0760% ( 273) 00:07:02.229 14115.446 - 14216.271: 50.6996% ( 225) 00:07:02.229 14216.271 - 14317.095: 53.1600% ( 211) 00:07:02.229 14317.095 - 14417.920: 55.5504% ( 205) 00:07:02.229 14417.920 - 14518.745: 57.8591% ( 198) 00:07:02.230 14518.745 - 14619.569: 60.0863% ( 191) 00:07:02.230 14619.569 - 14720.394: 61.8354% ( 150) 00:07:02.230 14720.394 - 14821.218: 63.4911% ( 142) 00:07:02.230 14821.218 - 14922.043: 65.0770% ( 136) 00:07:02.230 14922.043 - 15022.868: 66.2896% ( 104) 00:07:02.230 15022.868 - 15123.692: 67.2924% ( 86) 00:07:02.230 15123.692 - 15224.517: 68.1786% ( 76) 00:07:02.230 15224.517 - 15325.342: 68.9132% ( 63) 00:07:02.230 15325.342 - 15426.166: 69.5896% ( 58) 00:07:02.230 15426.166 - 15526.991: 70.3125% ( 62) 00:07:02.230 15526.991 - 15627.815: 71.0238% ( 61) 00:07:02.230 15627.815 - 15728.640: 71.9100% ( 76) 00:07:02.230 15728.640 - 15829.465: 72.8428% ( 80) 00:07:02.230 15829.465 - 15930.289: 73.6241% ( 67) 00:07:02.230 15930.289 - 16031.114: 74.5452% ( 79) 00:07:02.230 16031.114 - 16131.938: 75.5597% ( 87) 00:07:02.230 16131.938 - 16232.763: 
76.7024% ( 98) 00:07:02.230 16232.763 - 16333.588: 77.8918% ( 102) 00:07:02.230 16333.588 - 16434.412: 78.9646% ( 92) 00:07:02.230 16434.412 - 16535.237: 79.9207% ( 82) 00:07:02.230 16535.237 - 16636.062: 80.8885% ( 83) 00:07:02.230 16636.062 - 16736.886: 81.7164% ( 71) 00:07:02.230 16736.886 - 16837.711: 82.5093% ( 68) 00:07:02.230 16837.711 - 16938.535: 83.3489% ( 72) 00:07:02.230 16938.535 - 17039.360: 84.0602% ( 61) 00:07:02.230 17039.360 - 17140.185: 84.6898% ( 54) 00:07:02.230 17140.185 - 17241.009: 85.2845% ( 51) 00:07:02.230 17241.009 - 17341.834: 85.7626% ( 41) 00:07:02.230 17341.834 - 17442.658: 86.3923% ( 54) 00:07:02.230 17442.658 - 17543.483: 87.0686% ( 58) 00:07:02.230 17543.483 - 17644.308: 87.6866% ( 53) 00:07:02.230 17644.308 - 17745.132: 88.4328% ( 64) 00:07:02.230 17745.132 - 17845.957: 88.9692% ( 46) 00:07:02.230 17845.957 - 17946.782: 89.5289% ( 48) 00:07:02.230 17946.782 - 18047.606: 90.1003% ( 49) 00:07:02.230 18047.606 - 18148.431: 90.6600% ( 48) 00:07:02.230 18148.431 - 18249.255: 91.2313% ( 49) 00:07:02.230 18249.255 - 18350.080: 91.8377% ( 52) 00:07:02.230 18350.080 - 18450.905: 92.5023% ( 57) 00:07:02.230 18450.905 - 18551.729: 93.1087% ( 52) 00:07:02.230 18551.729 - 18652.554: 93.6684% ( 48) 00:07:02.230 18652.554 - 18753.378: 94.2164% ( 47) 00:07:02.230 18753.378 - 18854.203: 94.7645% ( 47) 00:07:02.230 18854.203 - 18955.028: 95.2192% ( 39) 00:07:02.230 18955.028 - 19055.852: 95.6507% ( 37) 00:07:02.230 19055.852 - 19156.677: 96.0938% ( 38) 00:07:02.230 19156.677 - 19257.502: 96.5485% ( 39) 00:07:02.230 19257.502 - 19358.326: 96.9100% ( 31) 00:07:02.230 19358.326 - 19459.151: 97.2015% ( 25) 00:07:02.230 19459.151 - 19559.975: 97.3881% ( 16) 00:07:02.230 19559.975 - 19660.800: 97.5280% ( 12) 00:07:02.230 19660.800 - 19761.625: 97.6796% ( 13) 00:07:02.230 19761.625 - 19862.449: 97.8428% ( 14) 00:07:02.230 19862.449 - 19963.274: 97.9944% ( 13) 00:07:02.230 19963.274 - 20064.098: 98.1227% ( 11) 00:07:02.230 20064.098 - 20164.923: 98.2276% ( 9) 00:07:02.230 20164.923 - 20265.748: 98.2859% ( 5) 00:07:02.230 20265.748 - 20366.572: 98.3442% ( 5) 00:07:02.230 20366.572 - 20467.397: 98.4025% ( 5) 00:07:02.230 20467.397 - 20568.222: 98.4608% ( 5) 00:07:02.230 20568.222 - 20669.046: 98.5075% ( 4) 00:07:02.230 26416.049 - 26617.698: 98.5308% ( 2) 00:07:02.230 26617.698 - 26819.348: 98.6241% ( 8) 00:07:02.230 26819.348 - 27020.997: 98.7174% ( 8) 00:07:02.230 27020.997 - 27222.646: 98.8106% ( 8) 00:07:02.230 27222.646 - 27424.295: 98.9039% ( 8) 00:07:02.230 27424.295 - 27625.945: 98.9972% ( 8) 00:07:02.230 27625.945 - 27827.594: 99.1021% ( 9) 00:07:02.230 27827.594 - 28029.243: 99.1954% ( 8) 00:07:02.230 28029.243 - 28230.892: 99.2537% ( 5) 00:07:02.230 33877.071 - 34078.720: 99.3237% ( 6) 00:07:02.230 34078.720 - 34280.369: 99.4170% ( 8) 00:07:02.230 34280.369 - 34482.018: 99.4986% ( 7) 00:07:02.230 34482.018 - 34683.668: 99.5919% ( 8) 00:07:02.230 34683.668 - 34885.317: 99.6968% ( 9) 00:07:02.230 34885.317 - 35086.966: 99.7901% ( 8) 00:07:02.230 35086.966 - 35288.615: 99.8951% ( 9) 00:07:02.230 35288.615 - 35490.265: 99.9883% ( 8) 00:07:02.230 35490.265 - 35691.914: 100.0000% ( 1) 00:07:02.230 00:07:02.230 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:02.230 ============================================================================== 00:07:02.230 Range in us Cumulative IO count 00:07:02.230 9326.277 - 9376.689: 0.0466% ( 4) 00:07:02.230 9376.689 - 9427.102: 0.0700% ( 2) 00:07:02.230 9427.102 - 9477.514: 0.1049% ( 3) 00:07:02.230 9477.514 - 9527.926: 
0.1399% ( 3) 00:07:02.230 9527.926 - 9578.338: 0.1632% ( 2) 00:07:02.230 9578.338 - 9628.751: 0.1982% ( 3) 00:07:02.230 9628.751 - 9679.163: 0.2449% ( 4) 00:07:02.230 9679.163 - 9729.575: 0.2799% ( 3) 00:07:02.230 9729.575 - 9779.988: 0.3032% ( 2) 00:07:02.230 9779.988 - 9830.400: 0.3382% ( 3) 00:07:02.230 9830.400 - 9880.812: 0.3731% ( 3) 00:07:02.230 9880.812 - 9931.225: 0.4081% ( 3) 00:07:02.230 9931.225 - 9981.637: 0.4314% ( 2) 00:07:02.230 9981.637 - 10032.049: 0.4664% ( 3) 00:07:02.230 10032.049 - 10082.462: 0.5014% ( 3) 00:07:02.230 10082.462 - 10132.874: 0.5364% ( 3) 00:07:02.230 10132.874 - 10183.286: 0.5714% ( 3) 00:07:02.230 10183.286 - 10233.698: 0.6063% ( 3) 00:07:02.230 10233.698 - 10284.111: 0.6413% ( 3) 00:07:02.230 10284.111 - 10334.523: 0.6646% ( 2) 00:07:02.230 10334.523 - 10384.935: 0.6996% ( 3) 00:07:02.230 10384.935 - 10435.348: 0.7346% ( 3) 00:07:02.230 10435.348 - 10485.760: 0.7463% ( 1) 00:07:02.230 10485.760 - 10536.172: 0.7696% ( 2) 00:07:02.230 10536.172 - 10586.585: 0.8046% ( 3) 00:07:02.230 10586.585 - 10636.997: 0.8862% ( 7) 00:07:02.230 10636.997 - 10687.409: 0.9212% ( 3) 00:07:02.230 10687.409 - 10737.822: 1.0028% ( 7) 00:07:02.230 10737.822 - 10788.234: 1.0494% ( 4) 00:07:02.230 10788.234 - 10838.646: 1.0961% ( 4) 00:07:02.230 10838.646 - 10889.058: 1.1544% ( 5) 00:07:02.230 10889.058 - 10939.471: 1.2127% ( 5) 00:07:02.231 10939.471 - 10989.883: 1.2593% ( 4) 00:07:02.231 10989.883 - 11040.295: 1.3293% ( 6) 00:07:02.231 11040.295 - 11090.708: 1.4342% ( 9) 00:07:02.231 11090.708 - 11141.120: 1.5159% ( 7) 00:07:02.231 11141.120 - 11191.532: 1.6091% ( 8) 00:07:02.231 11191.532 - 11241.945: 1.7607% ( 13) 00:07:02.231 11241.945 - 11292.357: 1.8657% ( 9) 00:07:02.231 11292.357 - 11342.769: 2.0173% ( 13) 00:07:02.231 11342.769 - 11393.182: 2.2155% ( 17) 00:07:02.231 11393.182 - 11443.594: 2.4254% ( 18) 00:07:02.231 11443.594 - 11494.006: 2.5886% ( 14) 00:07:02.231 11494.006 - 11544.418: 2.8218% ( 20) 00:07:02.231 11544.418 - 11594.831: 3.0667% ( 21) 00:07:02.231 11594.831 - 11645.243: 3.3699% ( 26) 00:07:02.231 11645.243 - 11695.655: 3.7663% ( 34) 00:07:02.231 11695.655 - 11746.068: 4.1278% ( 31) 00:07:02.231 11746.068 - 11796.480: 4.5476% ( 36) 00:07:02.231 11796.480 - 11846.892: 4.8974% ( 30) 00:07:02.231 11846.892 - 11897.305: 5.3288% ( 37) 00:07:02.231 11897.305 - 11947.717: 5.7836% ( 39) 00:07:02.231 11947.717 - 11998.129: 6.2034% ( 36) 00:07:02.231 11998.129 - 12048.542: 6.6581% ( 39) 00:07:02.231 12048.542 - 12098.954: 7.0429% ( 33) 00:07:02.231 12098.954 - 12149.366: 7.4743% ( 37) 00:07:02.231 12149.366 - 12199.778: 7.8475% ( 32) 00:07:02.231 12199.778 - 12250.191: 8.3372% ( 42) 00:07:02.231 12250.191 - 12300.603: 8.7453% ( 35) 00:07:02.231 12300.603 - 12351.015: 9.1884% ( 38) 00:07:02.231 12351.015 - 12401.428: 9.6315% ( 38) 00:07:02.231 12401.428 - 12451.840: 10.0630% ( 37) 00:07:02.231 12451.840 - 12502.252: 10.4711% ( 35) 00:07:02.231 12502.252 - 12552.665: 10.8209% ( 30) 00:07:02.231 12552.665 - 12603.077: 11.2290% ( 35) 00:07:02.231 12603.077 - 12653.489: 11.8937% ( 57) 00:07:02.231 12653.489 - 12703.902: 12.5466% ( 56) 00:07:02.231 12703.902 - 12754.314: 13.3396% ( 68) 00:07:02.231 12754.314 - 12804.726: 14.1441% ( 69) 00:07:02.231 12804.726 - 12855.138: 14.9370% ( 68) 00:07:02.231 12855.138 - 12905.551: 15.7533% ( 70) 00:07:02.231 12905.551 - 13006.375: 17.6189% ( 160) 00:07:02.231 13006.375 - 13107.200: 19.3913% ( 152) 00:07:02.231 13107.200 - 13208.025: 21.5718% ( 187) 00:07:02.231 13208.025 - 13308.849: 23.9739% ( 206) 00:07:02.231 13308.849 - 
13409.674: 26.9939% ( 259) 00:07:02.231 13409.674 - 13510.498: 29.9090% ( 250) 00:07:02.231 13510.498 - 13611.323: 32.7542% ( 244) 00:07:02.231 13611.323 - 13712.148: 35.7743% ( 259) 00:07:02.231 13712.148 - 13812.972: 38.7360% ( 254) 00:07:02.231 13812.972 - 13913.797: 41.6861% ( 253) 00:07:02.231 13913.797 - 14014.622: 44.6595% ( 255) 00:07:02.231 14014.622 - 14115.446: 47.5979% ( 252) 00:07:02.231 14115.446 - 14216.271: 50.2449% ( 227) 00:07:02.231 14216.271 - 14317.095: 52.9384% ( 231) 00:07:02.231 14317.095 - 14417.920: 55.5154% ( 221) 00:07:02.231 14417.920 - 14518.745: 57.8825% ( 203) 00:07:02.231 14518.745 - 14619.569: 59.7948% ( 164) 00:07:02.231 14619.569 - 14720.394: 61.4855% ( 145) 00:07:02.231 14720.394 - 14821.218: 63.0597% ( 135) 00:07:02.231 14821.218 - 14922.043: 64.3074% ( 107) 00:07:02.231 14922.043 - 15022.868: 65.5434% ( 106) 00:07:02.231 15022.868 - 15123.692: 66.6628% ( 96) 00:07:02.231 15123.692 - 15224.517: 67.7589% ( 94) 00:07:02.231 15224.517 - 15325.342: 68.9132% ( 99) 00:07:02.231 15325.342 - 15426.166: 69.8577% ( 81) 00:07:02.231 15426.166 - 15526.991: 70.8605% ( 86) 00:07:02.231 15526.991 - 15627.815: 72.0382% ( 101) 00:07:02.231 15627.815 - 15728.640: 73.1110% ( 92) 00:07:02.231 15728.640 - 15829.465: 74.1371% ( 88) 00:07:02.231 15829.465 - 15930.289: 75.0583% ( 79) 00:07:02.231 15930.289 - 16031.114: 75.7812% ( 62) 00:07:02.231 16031.114 - 16131.938: 76.4809% ( 60) 00:07:02.231 16131.938 - 16232.763: 77.2621% ( 67) 00:07:02.231 16232.763 - 16333.588: 78.1367% ( 75) 00:07:02.231 16333.588 - 16434.412: 79.0578% ( 79) 00:07:02.231 16434.412 - 16535.237: 79.8391% ( 67) 00:07:02.231 16535.237 - 16636.062: 80.5504% ( 61) 00:07:02.231 16636.062 - 16736.886: 81.1567% ( 52) 00:07:02.231 16736.886 - 16837.711: 81.8447% ( 59) 00:07:02.231 16837.711 - 16938.535: 82.5326% ( 59) 00:07:02.231 16938.535 - 17039.360: 83.3256% ( 68) 00:07:02.231 17039.360 - 17140.185: 84.0019% ( 58) 00:07:02.231 17140.185 - 17241.009: 84.6898% ( 59) 00:07:02.231 17241.009 - 17341.834: 85.4128% ( 62) 00:07:02.231 17341.834 - 17442.658: 86.2640% ( 73) 00:07:02.231 17442.658 - 17543.483: 86.9753% ( 61) 00:07:02.231 17543.483 - 17644.308: 87.6516% ( 58) 00:07:02.231 17644.308 - 17745.132: 88.3745% ( 62) 00:07:02.231 17745.132 - 17845.957: 89.0742% ( 60) 00:07:02.231 17845.957 - 17946.782: 89.7971% ( 62) 00:07:02.231 17946.782 - 18047.606: 90.4035% ( 52) 00:07:02.231 18047.606 - 18148.431: 90.9748% ( 49) 00:07:02.231 18148.431 - 18249.255: 91.5345% ( 48) 00:07:02.231 18249.255 - 18350.080: 92.0592% ( 45) 00:07:02.231 18350.080 - 18450.905: 92.5490% ( 42) 00:07:02.231 18450.905 - 18551.729: 93.0387% ( 42) 00:07:02.231 18551.729 - 18652.554: 93.5518% ( 44) 00:07:02.231 18652.554 - 18753.378: 93.9132% ( 31) 00:07:02.231 18753.378 - 18854.203: 94.2864% ( 32) 00:07:02.231 18854.203 - 18955.028: 94.6362% ( 30) 00:07:02.231 18955.028 - 19055.852: 95.0793% ( 38) 00:07:02.231 19055.852 - 19156.677: 95.4641% ( 33) 00:07:02.231 19156.677 - 19257.502: 95.8139% ( 30) 00:07:02.231 19257.502 - 19358.326: 96.0938% ( 24) 00:07:02.231 19358.326 - 19459.151: 96.3386% ( 21) 00:07:02.231 19459.151 - 19559.975: 96.5485% ( 18) 00:07:02.231 19559.975 - 19660.800: 96.7701% ( 19) 00:07:02.231 19660.800 - 19761.625: 96.9799% ( 18) 00:07:02.231 19761.625 - 19862.449: 97.1782% ( 17) 00:07:02.231 19862.449 - 19963.274: 97.3764% ( 17) 00:07:02.231 19963.274 - 20064.098: 97.5630% ( 16) 00:07:02.231 20064.098 - 20164.923: 97.7146% ( 13) 00:07:02.231 20164.923 - 20265.748: 97.7962% ( 7) 00:07:02.231 20265.748 - 20366.572: 
97.8895% ( 8) 00:07:02.232 20366.572 - 20467.397: 97.9944% ( 9) 00:07:02.232 20467.397 - 20568.222: 98.0877% ( 8) 00:07:02.232 20568.222 - 20669.046: 98.1926% ( 9) 00:07:02.232 20669.046 - 20769.871: 98.2743% ( 7) 00:07:02.232 20769.871 - 20870.695: 98.3442% ( 6) 00:07:02.232 20870.695 - 20971.520: 98.4142% ( 6) 00:07:02.232 20971.520 - 21072.345: 98.4608% ( 4) 00:07:02.232 21072.345 - 21173.169: 98.4958% ( 3) 00:07:02.232 21173.169 - 21273.994: 98.5075% ( 1) 00:07:02.232 25609.452 - 25710.277: 98.5424% ( 3) 00:07:02.232 25710.277 - 25811.102: 98.5891% ( 4) 00:07:02.232 25811.102 - 26012.751: 98.6824% ( 8) 00:07:02.232 26012.751 - 26214.400: 98.7873% ( 9) 00:07:02.232 26214.400 - 26416.049: 98.8806% ( 8) 00:07:02.232 26416.049 - 26617.698: 98.9739% ( 8) 00:07:02.232 26617.698 - 26819.348: 99.0672% ( 8) 00:07:02.232 26819.348 - 27020.997: 99.1604% ( 8) 00:07:02.232 27020.997 - 27222.646: 99.2537% ( 8) 00:07:02.232 33070.474 - 33272.123: 99.2654% ( 1) 00:07:02.232 33272.123 - 33473.772: 99.3470% ( 7) 00:07:02.232 33473.772 - 33675.422: 99.4403% ( 8) 00:07:02.232 33675.422 - 33877.071: 99.5336% ( 8) 00:07:02.232 33877.071 - 34078.720: 99.6269% ( 8) 00:07:02.232 34078.720 - 34280.369: 99.7201% ( 8) 00:07:02.232 34280.369 - 34482.018: 99.8251% ( 9) 00:07:02.232 34482.018 - 34683.668: 99.9184% ( 8) 00:07:02.232 34683.668 - 34885.317: 100.0000% ( 7) 00:07:02.232 00:07:02.232 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:02.232 ============================================================================== 00:07:02.232 Range in us Cumulative IO count 00:07:02.232 8922.978 - 8973.391: 0.0117% ( 1) 00:07:02.232 8973.391 - 9023.803: 0.0233% ( 1) 00:07:02.232 9023.803 - 9074.215: 0.0700% ( 4) 00:07:02.232 9074.215 - 9124.628: 0.1049% ( 3) 00:07:02.232 9124.628 - 9175.040: 0.1283% ( 2) 00:07:02.232 9175.040 - 9225.452: 0.1632% ( 3) 00:07:02.232 9225.452 - 9275.865: 0.1866% ( 2) 00:07:02.232 9275.865 - 9326.277: 0.2215% ( 3) 00:07:02.232 9326.277 - 9376.689: 0.2682% ( 4) 00:07:02.232 9376.689 - 9427.102: 0.2915% ( 2) 00:07:02.232 9427.102 - 9477.514: 0.3265% ( 3) 00:07:02.232 9477.514 - 9527.926: 0.3498% ( 2) 00:07:02.232 9527.926 - 9578.338: 0.3848% ( 3) 00:07:02.232 9578.338 - 9628.751: 0.4314% ( 4) 00:07:02.232 9628.751 - 9679.163: 0.4781% ( 4) 00:07:02.232 9679.163 - 9729.575: 0.5364% ( 5) 00:07:02.232 9729.575 - 9779.988: 0.5714% ( 3) 00:07:02.232 9779.988 - 9830.400: 0.6063% ( 3) 00:07:02.232 9830.400 - 9880.812: 0.6297% ( 2) 00:07:02.232 9880.812 - 9931.225: 0.6646% ( 3) 00:07:02.232 9931.225 - 9981.637: 0.6880% ( 2) 00:07:02.232 9981.637 - 10032.049: 0.7229% ( 3) 00:07:02.232 10032.049 - 10082.462: 0.8162% ( 8) 00:07:02.232 10082.462 - 10132.874: 0.8512% ( 3) 00:07:02.232 10132.874 - 10183.286: 0.8629% ( 1) 00:07:02.232 10183.286 - 10233.698: 0.8979% ( 3) 00:07:02.232 10233.698 - 10284.111: 0.9212% ( 2) 00:07:02.232 10284.111 - 10334.523: 0.9562% ( 3) 00:07:02.232 10334.523 - 10384.935: 0.9911% ( 3) 00:07:02.232 10384.935 - 10435.348: 1.0261% ( 3) 00:07:02.232 10435.348 - 10485.760: 1.0611% ( 3) 00:07:02.232 10485.760 - 10536.172: 1.0844% ( 2) 00:07:02.232 10536.172 - 10586.585: 1.1311% ( 4) 00:07:02.232 10586.585 - 10636.997: 1.2010% ( 6) 00:07:02.232 10636.997 - 10687.409: 1.2477% ( 4) 00:07:02.232 10687.409 - 10737.822: 1.3176% ( 6) 00:07:02.232 10737.822 - 10788.234: 1.3759% ( 5) 00:07:02.232 10788.234 - 10838.646: 1.4342% ( 5) 00:07:02.232 10838.646 - 10889.058: 1.5159% ( 7) 00:07:02.232 10889.058 - 10939.471: 1.5742% ( 5) 00:07:02.232 10939.471 - 10989.883: 1.6208% ( 
4) 00:07:02.232 10989.883 - 11040.295: 1.7491% ( 11) 00:07:02.232 11040.295 - 11090.708: 1.8657% ( 10) 00:07:02.232 11090.708 - 11141.120: 1.9590% ( 8) 00:07:02.232 11141.120 - 11191.532: 2.0522% ( 8) 00:07:02.232 11191.532 - 11241.945: 2.1572% ( 9) 00:07:02.232 11241.945 - 11292.357: 2.4137% ( 22) 00:07:02.232 11292.357 - 11342.769: 2.6236% ( 18) 00:07:02.232 11342.769 - 11393.182: 2.8685% ( 21) 00:07:02.232 11393.182 - 11443.594: 3.0667% ( 17) 00:07:02.232 11443.594 - 11494.006: 3.2533% ( 16) 00:07:02.232 11494.006 - 11544.418: 3.4865% ( 20) 00:07:02.232 11544.418 - 11594.831: 3.7547% ( 23) 00:07:02.232 11594.831 - 11645.243: 4.0229% ( 23) 00:07:02.232 11645.243 - 11695.655: 4.2910% ( 23) 00:07:02.232 11695.655 - 11746.068: 4.5592% ( 23) 00:07:02.232 11746.068 - 11796.480: 4.8158% ( 22) 00:07:02.232 11796.480 - 11846.892: 5.0490% ( 20) 00:07:02.232 11846.892 - 11897.305: 5.3055% ( 22) 00:07:02.232 11897.305 - 11947.717: 5.5271% ( 19) 00:07:02.232 11947.717 - 11998.129: 5.7603% ( 20) 00:07:02.232 11998.129 - 12048.542: 6.0751% ( 27) 00:07:02.232 12048.542 - 12098.954: 6.3899% ( 27) 00:07:02.232 12098.954 - 12149.366: 6.6814% ( 25) 00:07:02.232 12149.366 - 12199.778: 7.0196% ( 29) 00:07:02.232 12199.778 - 12250.191: 7.3228% ( 26) 00:07:02.232 12250.191 - 12300.603: 7.6493% ( 28) 00:07:02.232 12300.603 - 12351.015: 8.0690% ( 36) 00:07:02.232 12351.015 - 12401.428: 8.5005% ( 37) 00:07:02.232 12401.428 - 12451.840: 8.8736% ( 32) 00:07:02.232 12451.840 - 12502.252: 9.3750% ( 43) 00:07:02.232 12502.252 - 12552.665: 9.8064% ( 37) 00:07:02.232 12552.665 - 12603.077: 10.4478% ( 55) 00:07:02.232 12603.077 - 12653.489: 11.0891% ( 55) 00:07:02.232 12653.489 - 12703.902: 11.8237% ( 63) 00:07:02.232 12703.902 - 12754.314: 12.5583% ( 63) 00:07:02.232 12754.314 - 12804.726: 13.2929% ( 63) 00:07:02.232 12804.726 - 12855.138: 14.1558% ( 74) 00:07:02.232 12855.138 - 12905.551: 15.0536% ( 77) 00:07:02.232 12905.551 - 13006.375: 16.7677% ( 147) 00:07:02.232 13006.375 - 13107.200: 18.9132% ( 184) 00:07:02.232 13107.200 - 13208.025: 21.4902% ( 221) 00:07:02.232 13208.025 - 13308.849: 24.2421% ( 236) 00:07:02.232 13308.849 - 13409.674: 26.9823% ( 235) 00:07:02.232 13409.674 - 13510.498: 29.8507% ( 246) 00:07:02.232 13510.498 - 13611.323: 32.6726% ( 242) 00:07:02.232 13611.323 - 13712.148: 35.5061% ( 243) 00:07:02.232 13712.148 - 13812.972: 38.5028% ( 257) 00:07:02.233 13812.972 - 13913.797: 41.8610% ( 288) 00:07:02.233 13913.797 - 14014.622: 45.0326% ( 272) 00:07:02.233 14014.622 - 14115.446: 47.7962% ( 237) 00:07:02.233 14115.446 - 14216.271: 50.1516% ( 202) 00:07:02.233 14216.271 - 14317.095: 52.2971% ( 184) 00:07:02.233 14317.095 - 14417.920: 54.2794% ( 170) 00:07:02.233 14417.920 - 14518.745: 56.2267% ( 167) 00:07:02.233 14518.745 - 14619.569: 58.0457% ( 156) 00:07:02.233 14619.569 - 14720.394: 59.7715% ( 148) 00:07:02.233 14720.394 - 14821.218: 61.3689% ( 137) 00:07:02.233 14821.218 - 14922.043: 62.8032% ( 123) 00:07:02.233 14922.043 - 15022.868: 64.2024% ( 120) 00:07:02.233 15022.868 - 15123.692: 65.6017% ( 120) 00:07:02.233 15123.692 - 15224.517: 66.9543% ( 116) 00:07:02.233 15224.517 - 15325.342: 68.2486% ( 111) 00:07:02.233 15325.342 - 15426.166: 69.3913% ( 98) 00:07:02.233 15426.166 - 15526.991: 70.5690% ( 101) 00:07:02.233 15526.991 - 15627.815: 71.6768% ( 95) 00:07:02.233 15627.815 - 15728.640: 72.7962% ( 96) 00:07:02.233 15728.640 - 15829.465: 73.7290% ( 80) 00:07:02.233 15829.465 - 15930.289: 74.7551% ( 88) 00:07:02.233 15930.289 - 16031.114: 75.8162% ( 91) 00:07:02.233 16031.114 - 16131.938: 
76.8890% ( 92) 00:07:02.233 16131.938 - 16232.763: 77.9618% ( 92) 00:07:02.233 16232.763 - 16333.588: 78.9762% ( 87) 00:07:02.233 16333.588 - 16434.412: 79.9557% ( 84) 00:07:02.233 16434.412 - 16535.237: 80.9002% ( 81) 00:07:02.233 16535.237 - 16636.062: 81.7631% ( 74) 00:07:02.233 16636.062 - 16736.886: 82.6842% ( 79) 00:07:02.233 16736.886 - 16837.711: 83.7337% ( 90) 00:07:02.233 16837.711 - 16938.535: 84.6199% ( 76) 00:07:02.233 16938.535 - 17039.360: 85.4011% ( 67) 00:07:02.233 17039.360 - 17140.185: 86.0774% ( 58) 00:07:02.233 17140.185 - 17241.009: 86.6371% ( 48) 00:07:02.233 17241.009 - 17341.834: 87.1852% ( 47) 00:07:02.233 17341.834 - 17442.658: 87.6632% ( 41) 00:07:02.233 17442.658 - 17543.483: 88.1530% ( 42) 00:07:02.233 17543.483 - 17644.308: 88.5728% ( 36) 00:07:02.233 17644.308 - 17745.132: 88.8643% ( 25) 00:07:02.233 17745.132 - 17845.957: 89.1208% ( 22) 00:07:02.233 17845.957 - 17946.782: 89.3773% ( 22) 00:07:02.233 17946.782 - 18047.606: 89.6688% ( 25) 00:07:02.233 18047.606 - 18148.431: 90.0886% ( 36) 00:07:02.233 18148.431 - 18249.255: 90.4967% ( 35) 00:07:02.233 18249.255 - 18350.080: 91.0564% ( 48) 00:07:02.233 18350.080 - 18450.905: 91.5229% ( 40) 00:07:02.233 18450.905 - 18551.729: 92.0476% ( 45) 00:07:02.233 18551.729 - 18652.554: 92.5606% ( 44) 00:07:02.233 18652.554 - 18753.378: 93.1203% ( 48) 00:07:02.233 18753.378 - 18854.203: 93.7034% ( 50) 00:07:02.233 18854.203 - 18955.028: 94.2980% ( 51) 00:07:02.233 18955.028 - 19055.852: 94.8694% ( 49) 00:07:02.233 19055.852 - 19156.677: 95.4174% ( 47) 00:07:02.233 19156.677 - 19257.502: 95.8955% ( 41) 00:07:02.233 19257.502 - 19358.326: 96.2920% ( 34) 00:07:02.233 19358.326 - 19459.151: 96.5718% ( 24) 00:07:02.233 19459.151 - 19559.975: 96.7934% ( 19) 00:07:02.233 19559.975 - 19660.800: 96.9799% ( 16) 00:07:02.233 19660.800 - 19761.625: 97.2248% ( 21) 00:07:02.233 19761.625 - 19862.449: 97.4230% ( 17) 00:07:02.233 19862.449 - 19963.274: 97.6446% ( 19) 00:07:02.233 19963.274 - 20064.098: 97.8428% ( 17) 00:07:02.233 20064.098 - 20164.923: 97.9478% ( 9) 00:07:02.233 20164.923 - 20265.748: 98.0644% ( 10) 00:07:02.233 20265.748 - 20366.572: 98.1810% ( 10) 00:07:02.233 20366.572 - 20467.397: 98.2976% ( 10) 00:07:02.233 20467.397 - 20568.222: 98.3909% ( 8) 00:07:02.233 20568.222 - 20669.046: 98.4608% ( 6) 00:07:02.233 20669.046 - 20769.871: 98.5075% ( 4) 00:07:02.233 24097.083 - 24197.908: 98.5424% ( 3) 00:07:02.233 24197.908 - 24298.732: 98.5891% ( 4) 00:07:02.233 24298.732 - 24399.557: 98.6357% ( 4) 00:07:02.233 24399.557 - 24500.382: 98.6824% ( 4) 00:07:02.233 24500.382 - 24601.206: 98.7407% ( 5) 00:07:02.233 24601.206 - 24702.031: 98.7873% ( 4) 00:07:02.233 24702.031 - 24802.855: 98.8340% ( 4) 00:07:02.233 24802.855 - 24903.680: 98.8806% ( 4) 00:07:02.233 24903.680 - 25004.505: 98.9389% ( 5) 00:07:02.233 25004.505 - 25105.329: 98.9855% ( 4) 00:07:02.233 25105.329 - 25206.154: 99.0205% ( 3) 00:07:02.233 25206.154 - 25306.978: 99.0438% ( 2) 00:07:02.233 25306.978 - 25407.803: 99.0905% ( 4) 00:07:02.233 25407.803 - 25508.628: 99.1371% ( 4) 00:07:02.233 25508.628 - 25609.452: 99.1954% ( 5) 00:07:02.233 25609.452 - 25710.277: 99.2304% ( 3) 00:07:02.233 25710.277 - 25811.102: 99.2537% ( 2) 00:07:02.233 31658.929 - 31860.578: 99.2887% ( 3) 00:07:02.234 31860.578 - 32062.228: 99.3820% ( 8) 00:07:02.234 32062.228 - 32263.877: 99.4753% ( 8) 00:07:02.234 32263.877 - 32465.526: 99.5686% ( 8) 00:07:02.234 32465.526 - 32667.175: 99.6618% ( 8) 00:07:02.234 32667.175 - 32868.825: 99.7551% ( 8) 00:07:02.234 32868.825 - 33070.474: 99.8484% ( 
8) 00:07:02.234 33070.474 - 33272.123: 99.9417% ( 8) 00:07:02.234 33272.123 - 33473.772: 100.0000% ( 5) 00:07:02.234 00:07:02.234 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:02.234 ============================================================================== 00:07:02.234 Range in us Cumulative IO count 00:07:02.234 8822.154 - 8872.566: 0.0116% ( 1) 00:07:02.234 8872.566 - 8922.978: 0.0347% ( 2) 00:07:02.234 8922.978 - 8973.391: 0.0694% ( 3) 00:07:02.234 8973.391 - 9023.803: 0.1042% ( 3) 00:07:02.234 9023.803 - 9074.215: 0.1389% ( 3) 00:07:02.234 9074.215 - 9124.628: 0.1736% ( 3) 00:07:02.234 9124.628 - 9175.040: 0.1968% ( 2) 00:07:02.234 9175.040 - 9225.452: 0.2315% ( 3) 00:07:02.234 9225.452 - 9275.865: 0.2662% ( 3) 00:07:02.234 9275.865 - 9326.277: 0.3009% ( 3) 00:07:02.234 9326.277 - 9376.689: 0.3241% ( 2) 00:07:02.234 9376.689 - 9427.102: 0.3588% ( 3) 00:07:02.234 9427.102 - 9477.514: 0.3935% ( 3) 00:07:02.234 9477.514 - 9527.926: 0.4167% ( 2) 00:07:02.234 9527.926 - 9578.338: 0.4514% ( 3) 00:07:02.234 9578.338 - 9628.751: 0.4745% ( 2) 00:07:02.234 9628.751 - 9679.163: 0.5093% ( 3) 00:07:02.234 9679.163 - 9729.575: 0.5440% ( 3) 00:07:02.234 9729.575 - 9779.988: 0.5787% ( 3) 00:07:02.234 9779.988 - 9830.400: 0.6019% ( 2) 00:07:02.234 9830.400 - 9880.812: 0.6366% ( 3) 00:07:02.234 9880.812 - 9931.225: 0.6713% ( 3) 00:07:02.234 9931.225 - 9981.637: 0.7060% ( 3) 00:07:02.234 9981.637 - 10032.049: 0.7292% ( 2) 00:07:02.234 10032.049 - 10082.462: 0.7407% ( 1) 00:07:02.234 10132.874 - 10183.286: 0.7523% ( 1) 00:07:02.234 10183.286 - 10233.698: 0.7986% ( 4) 00:07:02.234 10233.698 - 10284.111: 0.8333% ( 3) 00:07:02.234 10284.111 - 10334.523: 0.8796% ( 4) 00:07:02.234 10334.523 - 10384.935: 0.9144% ( 3) 00:07:02.234 10384.935 - 10435.348: 0.9491% ( 3) 00:07:02.234 10435.348 - 10485.760: 0.9954% ( 4) 00:07:02.234 10485.760 - 10536.172: 1.0301% ( 3) 00:07:02.234 10536.172 - 10586.585: 1.0532% ( 2) 00:07:02.234 10586.585 - 10636.997: 1.0995% ( 4) 00:07:02.234 10636.997 - 10687.409: 1.1921% ( 8) 00:07:02.234 10687.409 - 10737.822: 1.2963% ( 9) 00:07:02.234 10737.822 - 10788.234: 1.3657% ( 6) 00:07:02.234 10788.234 - 10838.646: 1.4352% ( 6) 00:07:02.234 10838.646 - 10889.058: 1.6551% ( 19) 00:07:02.234 10889.058 - 10939.471: 1.7824% ( 11) 00:07:02.234 10939.471 - 10989.883: 1.9097% ( 11) 00:07:02.234 10989.883 - 11040.295: 2.0718% ( 14) 00:07:02.234 11040.295 - 11090.708: 2.2685% ( 17) 00:07:02.234 11090.708 - 11141.120: 2.4769% ( 18) 00:07:02.234 11141.120 - 11191.532: 2.6505% ( 15) 00:07:02.234 11191.532 - 11241.945: 2.8241% ( 15) 00:07:02.234 11241.945 - 11292.357: 3.0324% ( 18) 00:07:02.234 11292.357 - 11342.769: 3.2755% ( 21) 00:07:02.234 11342.769 - 11393.182: 3.4954% ( 19) 00:07:02.234 11393.182 - 11443.594: 3.6921% ( 17) 00:07:02.234 11443.594 - 11494.006: 3.8542% ( 14) 00:07:02.234 11494.006 - 11544.418: 4.0972% ( 21) 00:07:02.234 11544.418 - 11594.831: 4.3519% ( 22) 00:07:02.234 11594.831 - 11645.243: 4.6412% ( 25) 00:07:02.234 11645.243 - 11695.655: 4.8611% ( 19) 00:07:02.234 11695.655 - 11746.068: 5.1273% ( 23) 00:07:02.234 11746.068 - 11796.480: 5.3472% ( 19) 00:07:02.234 11796.480 - 11846.892: 5.5787% ( 20) 00:07:02.234 11846.892 - 11897.305: 5.8102% ( 20) 00:07:02.234 11897.305 - 11947.717: 6.0880% ( 24) 00:07:02.234 11947.717 - 11998.129: 6.4005% ( 27) 00:07:02.234 11998.129 - 12048.542: 6.6782% ( 24) 00:07:02.234 12048.542 - 12098.954: 7.0255% ( 30) 00:07:02.234 12098.954 - 12149.366: 7.4421% ( 36) 00:07:02.234 12149.366 - 12199.778: 7.8241% ( 33) 
00:07:02.234 12199.778 - 12250.191: 8.1597% ( 29) 00:07:02.234 12250.191 - 12300.603: 8.5069% ( 30) 00:07:02.234 12300.603 - 12351.015: 8.9236% ( 36) 00:07:02.234 12351.015 - 12401.428: 9.4444% ( 45) 00:07:02.234 12401.428 - 12451.840: 9.9421% ( 43) 00:07:02.234 12451.840 - 12502.252: 10.4051% ( 40) 00:07:02.234 12502.252 - 12552.665: 10.8796% ( 41) 00:07:02.234 12552.665 - 12603.077: 11.4468% ( 49) 00:07:02.234 12603.077 - 12653.489: 12.1412% ( 60) 00:07:02.234 12653.489 - 12703.902: 12.8356% ( 60) 00:07:02.234 12703.902 - 12754.314: 13.5995% ( 66) 00:07:02.234 12754.314 - 12804.726: 14.4676% ( 75) 00:07:02.234 12804.726 - 12855.138: 15.2546% ( 68) 00:07:02.234 12855.138 - 12905.551: 16.1574% ( 78) 00:07:02.234 12905.551 - 13006.375: 18.0440% ( 163) 00:07:02.234 13006.375 - 13107.200: 20.2315% ( 189) 00:07:02.234 13107.200 - 13208.025: 22.3380% ( 182) 00:07:02.234 13208.025 - 13308.849: 24.7222% ( 206) 00:07:02.234 13308.849 - 13409.674: 27.3380% ( 226) 00:07:02.234 13409.674 - 13510.498: 30.4630% ( 270) 00:07:02.234 13510.498 - 13611.323: 33.7269% ( 282) 00:07:02.234 13611.323 - 13712.148: 36.8403% ( 269) 00:07:02.234 13712.148 - 13812.972: 39.8032% ( 256) 00:07:02.234 13812.972 - 13913.797: 42.3843% ( 223) 00:07:02.234 13913.797 - 14014.622: 44.8495% ( 213) 00:07:02.234 14014.622 - 14115.446: 47.4653% ( 226) 00:07:02.234 14115.446 - 14216.271: 50.0926% ( 227) 00:07:02.234 14216.271 - 14317.095: 52.4653% ( 205) 00:07:02.234 14317.095 - 14417.920: 54.4560% ( 172) 00:07:02.234 14417.920 - 14518.745: 56.2153% ( 152) 00:07:02.234 14518.745 - 14619.569: 57.6505% ( 124) 00:07:02.234 14619.569 - 14720.394: 58.8773% ( 106) 00:07:02.234 14720.394 - 14821.218: 60.3241% ( 125) 00:07:02.234 14821.218 - 14922.043: 61.7245% ( 121) 00:07:02.234 14922.043 - 15022.868: 63.1134% ( 120) 00:07:02.234 15022.868 - 15123.692: 64.6065% ( 129) 00:07:02.234 15123.692 - 15224.517: 65.9954% ( 120) 00:07:02.234 15224.517 - 15325.342: 67.3495% ( 117) 00:07:02.234 15325.342 - 15426.166: 68.8426% ( 129) 00:07:02.234 15426.166 - 15526.991: 70.1620% ( 114) 00:07:02.234 15526.991 - 15627.815: 71.5856% ( 123) 00:07:02.234 15627.815 - 15728.640: 72.8472% ( 109) 00:07:02.234 15728.640 - 15829.465: 74.1667% ( 114) 00:07:02.234 15829.465 - 15930.289: 75.4514% ( 111) 00:07:02.234 15930.289 - 16031.114: 76.6551% ( 104) 00:07:02.234 16031.114 - 16131.938: 77.5116% ( 74) 00:07:02.234 16131.938 - 16232.763: 78.4954% ( 85) 00:07:02.234 16232.763 - 16333.588: 79.3750% ( 76) 00:07:02.234 16333.588 - 16434.412: 80.2778% ( 78) 00:07:02.234 16434.412 - 16535.237: 81.2153% ( 81) 00:07:02.234 16535.237 - 16636.062: 82.0949% ( 76) 00:07:02.234 16636.062 - 16736.886: 83.0556% ( 83) 00:07:02.234 16736.886 - 16837.711: 83.7847% ( 63) 00:07:02.234 16837.711 - 16938.535: 84.5486% ( 66) 00:07:02.234 16938.535 - 17039.360: 85.3588% ( 70) 00:07:02.234 17039.360 - 17140.185: 86.2616% ( 78) 00:07:02.235 17140.185 - 17241.009: 87.0023% ( 64) 00:07:02.235 17241.009 - 17341.834: 87.5694% ( 49) 00:07:02.235 17341.834 - 17442.658: 88.0787% ( 44) 00:07:02.235 17442.658 - 17543.483: 88.5069% ( 37) 00:07:02.235 17543.483 - 17644.308: 88.9120% ( 35) 00:07:02.235 17644.308 - 17745.132: 89.3056% ( 34) 00:07:02.235 17745.132 - 17845.957: 89.7917% ( 42) 00:07:02.235 17845.957 - 17946.782: 90.2778% ( 42) 00:07:02.235 17946.782 - 18047.606: 90.6944% ( 36) 00:07:02.235 18047.606 - 18148.431: 91.0764% ( 33) 00:07:02.235 18148.431 - 18249.255: 91.5278% ( 39) 00:07:02.235 18249.255 - 18350.080: 91.9560% ( 37) 00:07:02.235 18350.080 - 18450.905: 92.3264% ( 32) 
00:07:02.235 18450.905 - 18551.729: 92.7662% ( 38) 00:07:02.235 18551.729 - 18652.554: 93.2407% ( 41) 00:07:02.235 18652.554 - 18753.378: 93.7500% ( 44) 00:07:02.235 18753.378 - 18854.203: 94.2245% ( 41) 00:07:02.235 18854.203 - 18955.028: 94.7685% ( 47) 00:07:02.235 18955.028 - 19055.852: 95.2662% ( 43) 00:07:02.235 19055.852 - 19156.677: 95.6250% ( 31) 00:07:02.235 19156.677 - 19257.502: 95.9606% ( 29) 00:07:02.235 19257.502 - 19358.326: 96.3773% ( 36) 00:07:02.235 19358.326 - 19459.151: 96.8056% ( 37) 00:07:02.235 19459.151 - 19559.975: 97.1991% ( 34) 00:07:02.235 19559.975 - 19660.800: 97.5926% ( 34) 00:07:02.235 19660.800 - 19761.625: 97.9630% ( 32) 00:07:02.235 19761.625 - 19862.449: 98.3333% ( 32) 00:07:02.235 19862.449 - 19963.274: 98.6574% ( 28) 00:07:02.235 19963.274 - 20064.098: 98.9236% ( 23) 00:07:02.235 20064.098 - 20164.923: 99.1204% ( 17) 00:07:02.235 20164.923 - 20265.748: 99.2361% ( 10) 00:07:02.235 20265.748 - 20366.572: 99.2593% ( 2) 00:07:02.235 24097.083 - 24197.908: 99.3056% ( 4) 00:07:02.235 24197.908 - 24298.732: 99.3519% ( 4) 00:07:02.235 24298.732 - 24399.557: 99.3981% ( 4) 00:07:02.235 24399.557 - 24500.382: 99.4444% ( 4) 00:07:02.235 24500.382 - 24601.206: 99.4907% ( 4) 00:07:02.235 24601.206 - 24702.031: 99.5370% ( 4) 00:07:02.235 24702.031 - 24802.855: 99.5833% ( 4) 00:07:02.235 24802.855 - 24903.680: 99.6296% ( 4) 00:07:02.235 24903.680 - 25004.505: 99.6875% ( 5) 00:07:02.235 25004.505 - 25105.329: 99.7338% ( 4) 00:07:02.235 25105.329 - 25206.154: 99.7801% ( 4) 00:07:02.235 25206.154 - 25306.978: 99.8264% ( 4) 00:07:02.235 25306.978 - 25407.803: 99.8727% ( 4) 00:07:02.235 25407.803 - 25508.628: 99.9306% ( 5) 00:07:02.235 25508.628 - 25609.452: 99.9769% ( 4) 00:07:02.235 25609.452 - 25710.277: 100.0000% ( 2) 00:07:02.235 00:07:02.235 23:44:34 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:03.610 Initializing NVMe Controllers 00:07:03.610 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:03.610 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:03.610 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:03.610 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:03.610 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:03.610 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:03.610 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:03.610 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:03.610 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:03.610 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:03.610 Initialization complete. Launching workers. 
00:07:03.610 ======================================================== 00:07:03.610 Latency(us) 00:07:03.610 Device Information : IOPS MiB/s Average min max 00:07:03.610 PCIE (0000:00:13.0) NSID 1 from core 0: 10085.03 118.18 12717.67 9084.58 36910.38 00:07:03.610 PCIE (0000:00:10.0) NSID 1 from core 0: 10085.03 118.18 12698.56 8909.39 35605.60 00:07:03.610 PCIE (0000:00:11.0) NSID 1 from core 0: 10085.03 118.18 12678.92 8906.00 33863.47 00:07:03.611 PCIE (0000:00:12.0) NSID 1 from core 0: 10085.03 118.18 12659.09 9055.68 33337.37 00:07:03.611 PCIE (0000:00:12.0) NSID 2 from core 0: 10085.03 118.18 12639.99 9029.10 31594.84 00:07:03.611 PCIE (0000:00:12.0) NSID 3 from core 0: 10148.86 118.93 12541.14 9190.39 24235.97 00:07:03.611 ======================================================== 00:07:03.611 Total : 60574.03 709.85 12655.77 8906.00 36910.38 00:07:03.611 00:07:03.611 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:03.611 ================================================================================= 00:07:03.611 1.00000% : 9477.514us 00:07:03.611 10.00000% : 10132.874us 00:07:03.611 25.00000% : 10989.883us 00:07:03.611 50.00000% : 12098.954us 00:07:03.611 75.00000% : 13812.972us 00:07:03.611 90.00000% : 15526.991us 00:07:03.611 95.00000% : 16434.412us 00:07:03.611 98.00000% : 18047.606us 00:07:03.611 99.00000% : 28432.542us 00:07:03.611 99.50000% : 35490.265us 00:07:03.611 99.90000% : 36700.160us 00:07:03.611 99.99000% : 36901.809us 00:07:03.611 99.99900% : 37103.458us 00:07:03.611 99.99990% : 37103.458us 00:07:03.611 99.99999% : 37103.458us 00:07:03.611 00:07:03.611 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:03.611 ================================================================================= 00:07:03.611 1.00000% : 9275.865us 00:07:03.611 10.00000% : 10082.462us 00:07:03.611 25.00000% : 10989.883us 00:07:03.611 50.00000% : 12300.603us 00:07:03.611 75.00000% : 13812.972us 00:07:03.611 90.00000% : 15526.991us 00:07:03.611 95.00000% : 16434.412us 00:07:03.611 98.00000% : 18551.729us 00:07:03.611 99.00000% : 27222.646us 00:07:03.611 99.50000% : 34078.720us 00:07:03.611 99.90000% : 35288.615us 00:07:03.611 99.99000% : 35691.914us 00:07:03.611 99.99900% : 35691.914us 00:07:03.611 99.99990% : 35691.914us 00:07:03.611 99.99999% : 35691.914us 00:07:03.611 00:07:03.611 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:03.611 ================================================================================= 00:07:03.611 1.00000% : 9376.689us 00:07:03.611 10.00000% : 10183.286us 00:07:03.611 25.00000% : 10939.471us 00:07:03.611 50.00000% : 12300.603us 00:07:03.611 75.00000% : 13611.323us 00:07:03.611 90.00000% : 15526.991us 00:07:03.611 95.00000% : 16535.237us 00:07:03.611 98.00000% : 18955.028us 00:07:03.611 99.00000% : 25609.452us 00:07:03.611 99.50000% : 32465.526us 00:07:03.611 99.90000% : 33675.422us 00:07:03.611 99.99000% : 33877.071us 00:07:03.611 99.99900% : 33877.071us 00:07:03.611 99.99990% : 33877.071us 00:07:03.611 99.99999% : 33877.071us 00:07:03.611 00:07:03.611 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:03.611 ================================================================================= 00:07:03.611 1.00000% : 9477.514us 00:07:03.611 10.00000% : 10183.286us 00:07:03.611 25.00000% : 10838.646us 00:07:03.611 50.00000% : 12199.778us 00:07:03.611 75.00000% : 13913.797us 00:07:03.611 90.00000% : 15426.166us 00:07:03.611 95.00000% : 16535.237us 00:07:03.611 98.00000% : 
18753.378us 00:07:03.611 99.00000% : 25004.505us 00:07:03.611 99.50000% : 31860.578us 00:07:03.611 99.90000% : 33070.474us 00:07:03.611 99.99000% : 33473.772us 00:07:03.611 99.99900% : 33473.772us 00:07:03.611 99.99990% : 33473.772us 00:07:03.611 99.99999% : 33473.772us 00:07:03.611 00:07:03.611 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:03.611 ================================================================================= 00:07:03.611 1.00000% : 9527.926us 00:07:03.611 10.00000% : 10233.698us 00:07:03.611 25.00000% : 10838.646us 00:07:03.611 50.00000% : 12098.954us 00:07:03.611 75.00000% : 13913.797us 00:07:03.611 90.00000% : 15325.342us 00:07:03.611 95.00000% : 16636.062us 00:07:03.611 98.00000% : 17845.957us 00:07:03.611 99.00000% : 23492.135us 00:07:03.611 99.50000% : 30449.034us 00:07:03.611 99.90000% : 31457.280us 00:07:03.611 99.99000% : 31658.929us 00:07:03.611 99.99900% : 31658.929us 00:07:03.611 99.99990% : 31658.929us 00:07:03.611 99.99999% : 31658.929us 00:07:03.611 00:07:03.611 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:03.611 ================================================================================= 00:07:03.611 1.00000% : 9527.926us 00:07:03.611 10.00000% : 10132.874us 00:07:03.611 25.00000% : 10939.471us 00:07:03.611 50.00000% : 12098.954us 00:07:03.611 75.00000% : 13913.797us 00:07:03.611 90.00000% : 15325.342us 00:07:03.611 95.00000% : 16535.237us 00:07:03.611 98.00000% : 17644.308us 00:07:03.611 99.00000% : 19862.449us 00:07:03.611 99.50000% : 22786.363us 00:07:03.611 99.90000% : 23996.258us 00:07:03.611 99.99000% : 24298.732us 00:07:03.611 99.99900% : 24298.732us 00:07:03.611 99.99990% : 24298.732us 00:07:03.611 99.99999% : 24298.732us 00:07:03.611 00:07:03.611 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:03.611 ============================================================================== 00:07:03.611 Range in us Cumulative IO count 00:07:03.611 9074.215 - 9124.628: 0.0593% ( 6) 00:07:03.611 9124.628 - 9175.040: 0.1286% ( 7) 00:07:03.611 9175.040 - 9225.452: 0.1780% ( 5) 00:07:03.611 9225.452 - 9275.865: 0.2769% ( 10) 00:07:03.611 9275.865 - 9326.277: 0.4252% ( 15) 00:07:03.611 9326.277 - 9376.689: 0.5835% ( 16) 00:07:03.611 9376.689 - 9427.102: 0.8010% ( 22) 00:07:03.611 9427.102 - 9477.514: 1.0779% ( 28) 00:07:03.611 9477.514 - 9527.926: 1.4241% ( 35) 00:07:03.611 9527.926 - 9578.338: 2.0273% ( 61) 00:07:03.611 9578.338 - 9628.751: 2.5712% ( 55) 00:07:03.611 9628.751 - 9679.163: 3.1448% ( 58) 00:07:03.611 9679.163 - 9729.575: 3.7678% ( 63) 00:07:03.611 9729.575 - 9779.988: 4.3513% ( 59) 00:07:03.611 9779.988 - 9830.400: 5.2215% ( 88) 00:07:03.611 9830.400 - 9880.812: 6.0918% ( 88) 00:07:03.611 9880.812 - 9931.225: 6.9324% ( 85) 00:07:03.611 9931.225 - 9981.637: 7.6938% ( 77) 00:07:03.611 9981.637 - 10032.049: 8.5542% ( 87) 00:07:03.611 10032.049 - 10082.462: 9.9090% ( 137) 00:07:03.611 10082.462 - 10132.874: 10.9177% ( 102) 00:07:03.611 10132.874 - 10183.286: 11.7583% ( 85) 00:07:03.611 10183.286 - 10233.698: 12.6088% ( 86) 00:07:03.611 10233.698 - 10284.111: 13.6669% ( 107) 00:07:03.611 10284.111 - 10334.523: 14.7053% ( 105) 00:07:03.611 10334.523 - 10384.935: 15.7239% ( 103) 00:07:03.611 10384.935 - 10435.348: 16.5744% ( 86) 00:07:03.611 10435.348 - 10485.760: 17.4842% ( 92) 00:07:03.611 10485.760 - 10536.172: 18.2852% ( 81) 00:07:03.611 10536.172 - 10586.585: 18.9181% ( 64) 00:07:03.611 10586.585 - 10636.997: 19.6005% ( 69) 00:07:03.611 10636.997 - 10687.409: 20.2828% ( 69) 
00:07:03.611 10687.409 - 10737.822: 21.0740% ( 80) 00:07:03.611 10737.822 - 10788.234: 21.8849% ( 82) 00:07:03.611 10788.234 - 10838.646: 22.7354% ( 86) 00:07:03.611 10838.646 - 10889.058: 23.5759% ( 85) 00:07:03.611 10889.058 - 10939.471: 24.3473% ( 78) 00:07:03.611 10939.471 - 10989.883: 25.5044% ( 117) 00:07:03.611 10989.883 - 11040.295: 26.5131% ( 102) 00:07:03.611 11040.295 - 11090.708: 27.4031% ( 90) 00:07:03.611 11090.708 - 11141.120: 28.3030% ( 91) 00:07:03.611 11141.120 - 11191.532: 29.3513% ( 106) 00:07:03.611 11191.532 - 11241.945: 30.3402% ( 100) 00:07:03.611 11241.945 - 11292.357: 31.3390% ( 101) 00:07:03.611 11292.357 - 11342.769: 32.2884% ( 96) 00:07:03.611 11342.769 - 11393.182: 33.3861% ( 111) 00:07:03.611 11393.182 - 11443.594: 34.4047% ( 103) 00:07:03.611 11443.594 - 11494.006: 35.7496% ( 136) 00:07:03.611 11494.006 - 11544.418: 37.0649% ( 133) 00:07:03.611 11544.418 - 11594.831: 38.2812% ( 123) 00:07:03.611 11594.831 - 11645.243: 39.4778% ( 121) 00:07:03.611 11645.243 - 11695.655: 40.6843% ( 122) 00:07:03.611 11695.655 - 11746.068: 41.9600% ( 129) 00:07:03.611 11746.068 - 11796.480: 43.0973% ( 115) 00:07:03.611 11796.480 - 11846.892: 44.4027% ( 132) 00:07:03.611 11846.892 - 11897.305: 45.5894% ( 120) 00:07:03.611 11897.305 - 11947.717: 47.0530% ( 148) 00:07:03.611 11947.717 - 11998.129: 48.2595% ( 122) 00:07:03.611 11998.129 - 12048.542: 49.3968% ( 115) 00:07:03.611 12048.542 - 12098.954: 50.2868% ( 90) 00:07:03.611 12098.954 - 12149.366: 51.2658% ( 99) 00:07:03.611 12149.366 - 12199.778: 52.1163% ( 86) 00:07:03.611 12199.778 - 12250.191: 52.8085% ( 70) 00:07:03.611 12250.191 - 12300.603: 53.5601% ( 76) 00:07:03.611 12300.603 - 12351.015: 54.4106% ( 86) 00:07:03.611 12351.015 - 12401.428: 55.1226% ( 72) 00:07:03.611 12401.428 - 12451.840: 55.7160% ( 60) 00:07:03.611 12451.840 - 12502.252: 56.3192% ( 61) 00:07:03.611 12502.252 - 12552.665: 57.1598% ( 85) 00:07:03.611 12552.665 - 12603.077: 57.8817% ( 73) 00:07:03.611 12603.077 - 12653.489: 58.4256% ( 55) 00:07:03.611 12653.489 - 12703.902: 59.1377% ( 72) 00:07:03.611 12703.902 - 12754.314: 59.7409% ( 61) 00:07:03.611 12754.314 - 12804.726: 60.2255% ( 49) 00:07:03.611 12804.726 - 12855.138: 60.7397% ( 52) 00:07:03.611 12855.138 - 12905.551: 61.1551% ( 42) 00:07:03.611 12905.551 - 13006.375: 62.1440% ( 100) 00:07:03.611 13006.375 - 13107.200: 63.1428% ( 101) 00:07:03.611 13107.200 - 13208.025: 64.4086% ( 128) 00:07:03.611 13208.025 - 13308.849: 66.2381% ( 185) 00:07:03.611 13308.849 - 13409.674: 68.0380% ( 182) 00:07:03.611 13409.674 - 13510.498: 70.0158% ( 200) 00:07:03.611 13510.498 - 13611.323: 72.0036% ( 201) 00:07:03.611 13611.323 - 13712.148: 73.5562% ( 157) 00:07:03.611 13712.148 - 13812.972: 75.2176% ( 168) 00:07:03.611 13812.972 - 13913.797: 76.6614% ( 146) 00:07:03.611 13913.797 - 14014.622: 77.8778% ( 123) 00:07:03.611 14014.622 - 14115.446: 79.3414% ( 148) 00:07:03.612 14115.446 - 14216.271: 80.2611% ( 93) 00:07:03.612 14216.271 - 14317.095: 80.9830% ( 73) 00:07:03.612 14317.095 - 14417.920: 81.9620% ( 99) 00:07:03.612 14417.920 - 14518.745: 83.0004% ( 105) 00:07:03.612 14518.745 - 14619.569: 84.2860% ( 130) 00:07:03.612 14619.569 - 14720.394: 85.5024% ( 123) 00:07:03.612 14720.394 - 14821.218: 86.5605% ( 107) 00:07:03.612 14821.218 - 14922.043: 87.4308% ( 88) 00:07:03.612 14922.043 - 15022.868: 88.0439% ( 62) 00:07:03.612 15022.868 - 15123.692: 88.5285% ( 49) 00:07:03.612 15123.692 - 15224.517: 88.8153% ( 29) 00:07:03.612 15224.517 - 15325.342: 89.0724% ( 26) 00:07:03.612 15325.342 - 15426.166: 89.5669% ( 50) 
00:07:03.612 15426.166 - 15526.991: 90.0415% ( 48) 00:07:03.612 15526.991 - 15627.815: 90.6843% ( 65) 00:07:03.612 15627.815 - 15728.640: 91.3074% ( 63) 00:07:03.612 15728.640 - 15829.465: 91.7623% ( 46) 00:07:03.612 15829.465 - 15930.289: 92.5237% ( 77) 00:07:03.612 15930.289 - 16031.114: 93.0578% ( 54) 00:07:03.612 16031.114 - 16131.938: 93.6017% ( 55) 00:07:03.612 16131.938 - 16232.763: 94.1752% ( 58) 00:07:03.612 16232.763 - 16333.588: 94.6796% ( 51) 00:07:03.612 16333.588 - 16434.412: 95.0257% ( 35) 00:07:03.612 16434.412 - 16535.237: 95.2828% ( 26) 00:07:03.612 16535.237 - 16636.062: 95.5103% ( 23) 00:07:03.612 16636.062 - 16736.886: 95.8465% ( 34) 00:07:03.612 16736.886 - 16837.711: 96.1531% ( 31) 00:07:03.612 16837.711 - 16938.535: 96.4399% ( 29) 00:07:03.612 16938.535 - 17039.360: 96.6574% ( 22) 00:07:03.612 17039.360 - 17140.185: 96.8849% ( 23) 00:07:03.612 17140.185 - 17241.009: 97.1025% ( 22) 00:07:03.612 17241.009 - 17341.834: 97.2706% ( 17) 00:07:03.612 17341.834 - 17442.658: 97.3398% ( 7) 00:07:03.612 17442.658 - 17543.483: 97.3794% ( 4) 00:07:03.612 17543.483 - 17644.308: 97.4585% ( 8) 00:07:03.612 17644.308 - 17745.132: 97.5574% ( 10) 00:07:03.612 17745.132 - 17845.957: 97.6365% ( 8) 00:07:03.612 17845.957 - 17946.782: 97.9529% ( 32) 00:07:03.612 17946.782 - 18047.606: 98.0024% ( 5) 00:07:03.612 18047.606 - 18148.431: 98.0518% ( 5) 00:07:03.612 18148.431 - 18249.255: 98.0914% ( 4) 00:07:03.612 18249.255 - 18350.080: 98.1013% ( 1) 00:07:03.612 18753.378 - 18854.203: 98.1210% ( 2) 00:07:03.612 18854.203 - 18955.028: 98.1606% ( 4) 00:07:03.612 18955.028 - 19055.852: 98.2298% ( 7) 00:07:03.612 19055.852 - 19156.677: 98.2595% ( 3) 00:07:03.612 19156.677 - 19257.502: 98.2892% ( 3) 00:07:03.612 19257.502 - 19358.326: 98.3188% ( 3) 00:07:03.612 19358.326 - 19459.151: 98.3485% ( 3) 00:07:03.612 19459.151 - 19559.975: 98.3881% ( 4) 00:07:03.612 19559.975 - 19660.800: 98.4276% ( 4) 00:07:03.612 19660.800 - 19761.625: 98.4573% ( 3) 00:07:03.612 19761.625 - 19862.449: 98.4968% ( 4) 00:07:03.612 19862.449 - 19963.274: 98.5364% ( 4) 00:07:03.612 19963.274 - 20064.098: 98.5759% ( 4) 00:07:03.612 20064.098 - 20164.923: 98.6155% ( 4) 00:07:03.612 20164.923 - 20265.748: 98.6551% ( 4) 00:07:03.612 20265.748 - 20366.572: 98.6946% ( 4) 00:07:03.612 20366.572 - 20467.397: 98.7243% ( 3) 00:07:03.612 20467.397 - 20568.222: 98.7342% ( 1) 00:07:03.612 28029.243 - 28230.892: 98.9122% ( 18) 00:07:03.612 28230.892 - 28432.542: 99.0012% ( 9) 00:07:03.612 28432.542 - 28634.191: 99.0704% ( 7) 00:07:03.612 28634.191 - 28835.840: 99.1495% ( 8) 00:07:03.612 28835.840 - 29037.489: 99.2286% ( 8) 00:07:03.612 29037.489 - 29239.138: 99.3078% ( 8) 00:07:03.612 29239.138 - 29440.788: 99.3671% ( 6) 00:07:03.612 34078.720 - 34280.369: 99.4363% ( 7) 00:07:03.612 34280.369 - 34482.018: 99.4660% ( 3) 00:07:03.612 34482.018 - 34683.668: 99.4858% ( 2) 00:07:03.612 35086.966 - 35288.615: 99.4956% ( 1) 00:07:03.612 35288.615 - 35490.265: 99.5550% ( 6) 00:07:03.612 35490.265 - 35691.914: 99.6242% ( 7) 00:07:03.612 35691.914 - 35893.563: 99.6835% ( 6) 00:07:03.612 35893.563 - 36095.212: 99.7429% ( 6) 00:07:03.612 36095.212 - 36296.862: 99.8022% ( 6) 00:07:03.612 36296.862 - 36498.511: 99.8714% ( 7) 00:07:03.612 36498.511 - 36700.160: 99.9308% ( 6) 00:07:03.612 36700.160 - 36901.809: 99.9901% ( 6) 00:07:03.612 36901.809 - 37103.458: 100.0000% ( 1) 00:07:03.612 00:07:03.612 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:03.612 ============================================================================== 
00:07:03.612 Range in us Cumulative IO count 00:07:03.612 8872.566 - 8922.978: 0.0198% ( 2) 00:07:03.612 8922.978 - 8973.391: 0.1286% ( 11) 00:07:03.612 8973.391 - 9023.803: 0.1978% ( 7) 00:07:03.612 9023.803 - 9074.215: 0.3165% ( 12) 00:07:03.612 9074.215 - 9124.628: 0.5241% ( 21) 00:07:03.612 9124.628 - 9175.040: 0.7417% ( 22) 00:07:03.612 9175.040 - 9225.452: 0.9889% ( 25) 00:07:03.612 9225.452 - 9275.865: 1.3350% ( 35) 00:07:03.612 9275.865 - 9326.277: 1.6317% ( 30) 00:07:03.612 9326.277 - 9376.689: 1.9086% ( 28) 00:07:03.612 9376.689 - 9427.102: 2.2449% ( 34) 00:07:03.612 9427.102 - 9477.514: 2.6206% ( 38) 00:07:03.612 9477.514 - 9527.926: 3.0162% ( 40) 00:07:03.612 9527.926 - 9578.338: 3.5898% ( 58) 00:07:03.612 9578.338 - 9628.751: 4.1139% ( 53) 00:07:03.612 9628.751 - 9679.163: 4.4996% ( 39) 00:07:03.612 9679.163 - 9729.575: 4.9842% ( 49) 00:07:03.612 9729.575 - 9779.988: 5.7654% ( 79) 00:07:03.612 9779.988 - 9830.400: 6.6258% ( 87) 00:07:03.612 9830.400 - 9880.812: 7.5059% ( 89) 00:07:03.612 9880.812 - 9931.225: 8.4850% ( 99) 00:07:03.612 9931.225 - 9981.637: 9.2761% ( 80) 00:07:03.612 9981.637 - 10032.049: 9.9090% ( 64) 00:07:03.612 10032.049 - 10082.462: 10.5320% ( 63) 00:07:03.612 10082.462 - 10132.874: 11.2243% ( 70) 00:07:03.612 10132.874 - 10183.286: 11.9165% ( 70) 00:07:03.612 10183.286 - 10233.698: 12.7176% ( 81) 00:07:03.612 10233.698 - 10284.111: 13.3703% ( 66) 00:07:03.612 10284.111 - 10334.523: 14.0032% ( 64) 00:07:03.612 10334.523 - 10384.935: 14.7547% ( 76) 00:07:03.612 10384.935 - 10435.348: 15.3975% ( 65) 00:07:03.612 10435.348 - 10485.760: 16.1392% ( 75) 00:07:03.612 10485.760 - 10536.172: 17.0688% ( 94) 00:07:03.612 10536.172 - 10586.585: 18.0281% ( 97) 00:07:03.612 10586.585 - 10636.997: 19.2346% ( 122) 00:07:03.612 10636.997 - 10687.409: 20.1642% ( 94) 00:07:03.612 10687.409 - 10737.822: 21.2322% ( 108) 00:07:03.612 10737.822 - 10788.234: 22.0629% ( 84) 00:07:03.612 10788.234 - 10838.646: 23.0222% ( 97) 00:07:03.612 10838.646 - 10889.058: 23.8627% ( 85) 00:07:03.612 10889.058 - 10939.471: 24.8813% ( 103) 00:07:03.612 10939.471 - 10989.883: 26.1175% ( 125) 00:07:03.612 10989.883 - 11040.295: 27.0669% ( 96) 00:07:03.612 11040.295 - 11090.708: 27.9074% ( 85) 00:07:03.612 11090.708 - 11141.120: 29.0051% ( 111) 00:07:03.612 11141.120 - 11191.532: 30.0336% ( 104) 00:07:03.612 11191.532 - 11241.945: 30.9237% ( 90) 00:07:03.612 11241.945 - 11292.357: 31.6752% ( 76) 00:07:03.612 11292.357 - 11342.769: 32.6345% ( 97) 00:07:03.612 11342.769 - 11393.182: 33.4751% ( 85) 00:07:03.612 11393.182 - 11443.594: 34.6025% ( 114) 00:07:03.612 11443.594 - 11494.006: 35.6606% ( 107) 00:07:03.612 11494.006 - 11544.418: 36.6199% ( 97) 00:07:03.612 11544.418 - 11594.831: 37.9055% ( 130) 00:07:03.612 11594.831 - 11645.243: 38.9438% ( 105) 00:07:03.612 11645.243 - 11695.655: 40.0613% ( 113) 00:07:03.612 11695.655 - 11746.068: 40.9019% ( 85) 00:07:03.612 11746.068 - 11796.480: 41.7722% ( 88) 00:07:03.612 11796.480 - 11846.892: 42.5633% ( 80) 00:07:03.612 11846.892 - 11897.305: 43.4335% ( 88) 00:07:03.612 11897.305 - 11947.717: 44.4422% ( 102) 00:07:03.612 11947.717 - 11998.129: 45.3323% ( 90) 00:07:03.612 11998.129 - 12048.542: 46.2421% ( 92) 00:07:03.612 12048.542 - 12098.954: 47.1420% ( 91) 00:07:03.612 12098.954 - 12149.366: 47.9233% ( 79) 00:07:03.612 12149.366 - 12199.778: 48.8034% ( 89) 00:07:03.612 12199.778 - 12250.191: 49.8220% ( 103) 00:07:03.612 12250.191 - 12300.603: 50.8406% ( 103) 00:07:03.612 12300.603 - 12351.015: 51.8790% ( 105) 00:07:03.612 12351.015 - 12401.428: 52.9272% 
( 106) 00:07:03.612 12401.428 - 12451.840: 54.0645% ( 115) 00:07:03.612 12451.840 - 12502.252: 55.0435% ( 99) 00:07:03.612 12502.252 - 12552.665: 55.8347% ( 80) 00:07:03.612 12552.665 - 12603.077: 56.6950% ( 87) 00:07:03.612 12603.077 - 12653.489: 57.4070% ( 72) 00:07:03.612 12653.489 - 12703.902: 58.0400% ( 64) 00:07:03.612 12703.902 - 12754.314: 58.7223% ( 69) 00:07:03.612 12754.314 - 12804.726: 59.4541% ( 74) 00:07:03.612 12804.726 - 12855.138: 60.1760% ( 73) 00:07:03.612 12855.138 - 12905.551: 60.8782% ( 71) 00:07:03.612 12905.551 - 13006.375: 62.4308% ( 157) 00:07:03.612 13006.375 - 13107.200: 64.2207% ( 181) 00:07:03.612 13107.200 - 13208.025: 66.2678% ( 207) 00:07:03.612 13208.025 - 13308.849: 68.1270% ( 188) 00:07:03.612 13308.849 - 13409.674: 69.8081% ( 170) 00:07:03.612 13409.674 - 13510.498: 71.3904% ( 160) 00:07:03.612 13510.498 - 13611.323: 73.2793% ( 191) 00:07:03.612 13611.323 - 13712.148: 74.6835% ( 142) 00:07:03.612 13712.148 - 13812.972: 75.9098% ( 124) 00:07:03.612 13812.972 - 13913.797: 77.1855% ( 129) 00:07:03.612 13913.797 - 14014.622: 78.2832% ( 111) 00:07:03.612 14014.622 - 14115.446: 79.2722% ( 100) 00:07:03.612 14115.446 - 14216.271: 80.2611% ( 100) 00:07:03.612 14216.271 - 14317.095: 81.3687% ( 112) 00:07:03.612 14317.095 - 14417.920: 82.3081% ( 95) 00:07:03.612 14417.920 - 14518.745: 83.4652% ( 117) 00:07:03.612 14518.745 - 14619.569: 84.5629% ( 111) 00:07:03.612 14619.569 - 14720.394: 85.2848% ( 73) 00:07:03.612 14720.394 - 14821.218: 86.0067% ( 73) 00:07:03.612 14821.218 - 14922.043: 86.5605% ( 56) 00:07:03.612 14922.043 - 15022.868: 87.2627% ( 71) 00:07:03.612 15022.868 - 15123.692: 87.8956% ( 64) 00:07:03.612 15123.692 - 15224.517: 88.5977% ( 71) 00:07:03.613 15224.517 - 15325.342: 89.1317% ( 54) 00:07:03.613 15325.342 - 15426.166: 89.6559% ( 53) 00:07:03.613 15426.166 - 15526.991: 90.2591% ( 61) 00:07:03.613 15526.991 - 15627.815: 90.9019% ( 65) 00:07:03.613 15627.815 - 15728.640: 91.6634% ( 77) 00:07:03.613 15728.640 - 15829.465: 92.3952% ( 74) 00:07:03.613 15829.465 - 15930.289: 92.8402% ( 45) 00:07:03.613 15930.289 - 16031.114: 93.1962% ( 36) 00:07:03.613 16031.114 - 16131.938: 93.6709% ( 48) 00:07:03.613 16131.938 - 16232.763: 94.0862% ( 42) 00:07:03.613 16232.763 - 16333.588: 94.5312% ( 45) 00:07:03.613 16333.588 - 16434.412: 95.0059% ( 48) 00:07:03.613 16434.412 - 16535.237: 95.3817% ( 38) 00:07:03.613 16535.237 - 16636.062: 95.6982% ( 32) 00:07:03.613 16636.062 - 16736.886: 96.0146% ( 32) 00:07:03.613 16736.886 - 16837.711: 96.3311% ( 32) 00:07:03.613 16837.711 - 16938.535: 96.6179% ( 29) 00:07:03.613 16938.535 - 17039.360: 96.8157% ( 20) 00:07:03.613 17039.360 - 17140.185: 96.9343% ( 12) 00:07:03.613 17140.185 - 17241.009: 97.0332% ( 10) 00:07:03.613 17241.009 - 17341.834: 97.1519% ( 12) 00:07:03.613 17341.834 - 17442.658: 97.3002% ( 15) 00:07:03.613 17442.658 - 17543.483: 97.3398% ( 4) 00:07:03.613 17543.483 - 17644.308: 97.3991% ( 6) 00:07:03.613 17644.308 - 17745.132: 97.4881% ( 9) 00:07:03.613 17745.132 - 17845.957: 97.5277% ( 4) 00:07:03.613 17845.957 - 17946.782: 97.5771% ( 5) 00:07:03.613 17946.782 - 18047.606: 97.6859% ( 11) 00:07:03.613 18047.606 - 18148.431: 97.7255% ( 4) 00:07:03.613 18148.431 - 18249.255: 97.7848% ( 6) 00:07:03.613 18249.255 - 18350.080: 97.8639% ( 8) 00:07:03.613 18350.080 - 18450.905: 97.9727% ( 11) 00:07:03.613 18450.905 - 18551.729: 98.0716% ( 10) 00:07:03.613 18551.729 - 18652.554: 98.1903% ( 12) 00:07:03.613 18652.554 - 18753.378: 98.2595% ( 7) 00:07:03.613 18753.378 - 18854.203: 98.2991% ( 4) 00:07:03.613 
18854.203 - 18955.028: 98.3188% ( 2) 00:07:03.613 18955.028 - 19055.852: 98.3485% ( 3) 00:07:03.613 19055.852 - 19156.677: 98.4177% ( 7) 00:07:03.613 19156.677 - 19257.502: 98.4869% ( 7) 00:07:03.613 19257.502 - 19358.326: 98.5166% ( 3) 00:07:03.613 19358.326 - 19459.151: 98.5463% ( 3) 00:07:03.613 19459.151 - 19559.975: 98.5858% ( 4) 00:07:03.613 19559.975 - 19660.800: 98.6155% ( 3) 00:07:03.613 19660.800 - 19761.625: 98.6353% ( 2) 00:07:03.613 19761.625 - 19862.449: 98.6650% ( 3) 00:07:03.613 19862.449 - 19963.274: 98.6847% ( 2) 00:07:03.613 19963.274 - 20064.098: 98.7243% ( 4) 00:07:03.613 20064.098 - 20164.923: 98.7342% ( 1) 00:07:03.613 26214.400 - 26416.049: 98.7836% ( 5) 00:07:03.613 26416.049 - 26617.698: 98.8528% ( 7) 00:07:03.613 26617.698 - 26819.348: 98.9122% ( 6) 00:07:03.613 26819.348 - 27020.997: 98.9814% ( 7) 00:07:03.613 27020.997 - 27222.646: 99.0309% ( 5) 00:07:03.613 27222.646 - 27424.295: 99.1001% ( 7) 00:07:03.613 27424.295 - 27625.945: 99.1495% ( 5) 00:07:03.613 27625.945 - 27827.594: 99.1990% ( 5) 00:07:03.613 27827.594 - 28029.243: 99.2583% ( 6) 00:07:03.613 28029.243 - 28230.892: 99.3374% ( 8) 00:07:03.613 28230.892 - 28432.542: 99.3671% ( 3) 00:07:03.613 33473.772 - 33675.422: 99.4363% ( 7) 00:07:03.613 33675.422 - 33877.071: 99.4956% ( 6) 00:07:03.613 33877.071 - 34078.720: 99.5451% ( 5) 00:07:03.613 34078.720 - 34280.369: 99.6143% ( 7) 00:07:03.613 34280.369 - 34482.018: 99.6737% ( 6) 00:07:03.613 34482.018 - 34683.668: 99.7330% ( 6) 00:07:03.613 34683.668 - 34885.317: 99.7923% ( 6) 00:07:03.613 34885.317 - 35086.966: 99.8418% ( 5) 00:07:03.613 35086.966 - 35288.615: 99.9011% ( 6) 00:07:03.613 35288.615 - 35490.265: 99.9703% ( 7) 00:07:03.613 35490.265 - 35691.914: 100.0000% ( 3) 00:07:03.613 00:07:03.613 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:03.613 ============================================================================== 00:07:03.613 Range in us Cumulative IO count 00:07:03.613 8872.566 - 8922.978: 0.0099% ( 1) 00:07:03.613 9074.215 - 9124.628: 0.0791% ( 7) 00:07:03.613 9124.628 - 9175.040: 0.2373% ( 16) 00:07:03.613 9175.040 - 9225.452: 0.3857% ( 15) 00:07:03.613 9225.452 - 9275.865: 0.5835% ( 20) 00:07:03.613 9275.865 - 9326.277: 0.8109% ( 23) 00:07:03.613 9326.277 - 9376.689: 1.3054% ( 50) 00:07:03.613 9376.689 - 9427.102: 1.7207% ( 42) 00:07:03.613 9427.102 - 9477.514: 2.2745% ( 56) 00:07:03.613 9477.514 - 9527.926: 2.6701% ( 40) 00:07:03.613 9527.926 - 9578.338: 3.0360% ( 37) 00:07:03.613 9578.338 - 9628.751: 3.4909% ( 46) 00:07:03.613 9628.751 - 9679.163: 3.8667% ( 38) 00:07:03.613 9679.163 - 9729.575: 4.2524% ( 39) 00:07:03.613 9729.575 - 9779.988: 4.6578% ( 41) 00:07:03.613 9779.988 - 9830.400: 5.2314% ( 58) 00:07:03.613 9830.400 - 9880.812: 5.8347% ( 61) 00:07:03.613 9880.812 - 9931.225: 6.4873% ( 66) 00:07:03.613 9931.225 - 9981.637: 7.1400% ( 66) 00:07:03.613 9981.637 - 10032.049: 8.0004% ( 87) 00:07:03.613 10032.049 - 10082.462: 8.7718% ( 78) 00:07:03.613 10082.462 - 10132.874: 9.7112% ( 95) 00:07:03.613 10132.874 - 10183.286: 10.7199% ( 102) 00:07:03.613 10183.286 - 10233.698: 11.6594% ( 95) 00:07:03.613 10233.698 - 10284.111: 12.5890% ( 94) 00:07:03.613 10284.111 - 10334.523: 13.6373% ( 106) 00:07:03.613 10334.523 - 10384.935: 14.7646% ( 114) 00:07:03.613 10384.935 - 10435.348: 15.8623% ( 111) 00:07:03.613 10435.348 - 10485.760: 17.1381% ( 129) 00:07:03.613 10485.760 - 10536.172: 18.1962% ( 107) 00:07:03.613 10536.172 - 10586.585: 19.1555% ( 97) 00:07:03.613 10586.585 - 10636.997: 20.0850% ( 94) 00:07:03.613 
10636.997 - 10687.409: 21.1135% ( 104) 00:07:03.613 10687.409 - 10737.822: 21.9343% ( 83) 00:07:03.613 10737.822 - 10788.234: 22.8046% ( 88) 00:07:03.613 10788.234 - 10838.646: 23.5759% ( 78) 00:07:03.613 10838.646 - 10889.058: 24.3869% ( 82) 00:07:03.613 10889.058 - 10939.471: 25.2275% ( 85) 00:07:03.613 10939.471 - 10989.883: 26.1669% ( 95) 00:07:03.613 10989.883 - 11040.295: 27.0866% ( 93) 00:07:03.613 11040.295 - 11090.708: 28.4217% ( 135) 00:07:03.613 11090.708 - 11141.120: 29.4996% ( 109) 00:07:03.613 11141.120 - 11191.532: 30.4984% ( 101) 00:07:03.613 11191.532 - 11241.945: 31.6950% ( 121) 00:07:03.613 11241.945 - 11292.357: 32.6147% ( 93) 00:07:03.613 11292.357 - 11342.769: 33.6630% ( 106) 00:07:03.613 11342.769 - 11393.182: 34.8101% ( 116) 00:07:03.613 11393.182 - 11443.594: 35.8584% ( 106) 00:07:03.613 11443.594 - 11494.006: 36.7286% ( 88) 00:07:03.613 11494.006 - 11544.418: 37.5297% ( 81) 00:07:03.613 11544.418 - 11594.831: 38.2812% ( 76) 00:07:03.613 11594.831 - 11645.243: 39.0131% ( 74) 00:07:03.613 11645.243 - 11695.655: 39.6657% ( 66) 00:07:03.613 11695.655 - 11746.068: 40.4371% ( 78) 00:07:03.613 11746.068 - 11796.480: 41.0799% ( 65) 00:07:03.613 11796.480 - 11846.892: 41.8809% ( 81) 00:07:03.613 11846.892 - 11897.305: 42.7413% ( 87) 00:07:03.613 11897.305 - 11947.717: 43.6116% ( 88) 00:07:03.613 11947.717 - 11998.129: 44.5312% ( 93) 00:07:03.613 11998.129 - 12048.542: 45.4114% ( 89) 00:07:03.613 12048.542 - 12098.954: 46.5289% ( 113) 00:07:03.613 12098.954 - 12149.366: 47.4486% ( 93) 00:07:03.613 12149.366 - 12199.778: 48.4078% ( 97) 00:07:03.613 12199.778 - 12250.191: 49.2484% ( 85) 00:07:03.613 12250.191 - 12300.603: 50.1780% ( 94) 00:07:03.613 12300.603 - 12351.015: 51.1966% ( 103) 00:07:03.613 12351.015 - 12401.428: 52.0767% ( 89) 00:07:03.613 12401.428 - 12451.840: 52.8877% ( 82) 00:07:03.613 12451.840 - 12502.252: 53.7678% ( 89) 00:07:03.613 12502.252 - 12552.665: 54.6875% ( 93) 00:07:03.613 12552.665 - 12603.077: 55.7753% ( 110) 00:07:03.613 12603.077 - 12653.489: 56.7148% ( 95) 00:07:03.613 12653.489 - 12703.902: 57.7136% ( 101) 00:07:03.613 12703.902 - 12754.314: 58.6036% ( 90) 00:07:03.613 12754.314 - 12804.726: 59.4343% ( 84) 00:07:03.613 12804.726 - 12855.138: 60.2848% ( 86) 00:07:03.613 12855.138 - 12905.551: 61.2045% ( 93) 00:07:03.613 12905.551 - 13006.375: 63.6274% ( 245) 00:07:03.613 13006.375 - 13107.200: 65.8129% ( 221) 00:07:03.613 13107.200 - 13208.025: 68.0775% ( 229) 00:07:03.613 13208.025 - 13308.849: 70.3323% ( 228) 00:07:03.613 13308.849 - 13409.674: 72.2013% ( 189) 00:07:03.613 13409.674 - 13510.498: 73.7935% ( 161) 00:07:03.613 13510.498 - 13611.323: 75.0494% ( 127) 00:07:03.613 13611.323 - 13712.148: 76.2658% ( 123) 00:07:03.613 13712.148 - 13812.972: 77.3141% ( 106) 00:07:03.613 13812.972 - 13913.797: 78.5107% ( 121) 00:07:03.613 13913.797 - 14014.622: 79.4798% ( 98) 00:07:03.613 14014.622 - 14115.446: 80.2809% ( 81) 00:07:03.613 14115.446 - 14216.271: 81.0522% ( 78) 00:07:03.613 14216.271 - 14317.095: 81.6357% ( 59) 00:07:03.613 14317.095 - 14417.920: 82.0906% ( 46) 00:07:03.613 14417.920 - 14518.745: 82.5059% ( 42) 00:07:03.613 14518.745 - 14619.569: 83.0993% ( 60) 00:07:03.613 14619.569 - 14720.394: 83.6135% ( 52) 00:07:03.613 14720.394 - 14821.218: 84.3157% ( 71) 00:07:03.613 14821.218 - 14922.043: 85.2255% ( 92) 00:07:03.613 14922.043 - 15022.868: 86.4320% ( 122) 00:07:03.613 15022.868 - 15123.692: 87.4308% ( 101) 00:07:03.613 15123.692 - 15224.517: 88.2417% ( 82) 00:07:03.613 15224.517 - 15325.342: 89.2009% ( 97) 00:07:03.613 15325.342 - 
15426.166: 89.9921% ( 80) 00:07:03.613 15426.166 - 15526.991: 90.6250% ( 64) 00:07:03.613 15526.991 - 15627.815: 91.2282% ( 61) 00:07:03.613 15627.815 - 15728.640: 91.8908% ( 67) 00:07:03.613 15728.640 - 15829.465: 92.4051% ( 52) 00:07:03.613 15829.465 - 15930.289: 92.9589% ( 56) 00:07:03.613 15930.289 - 16031.114: 93.4039% ( 45) 00:07:03.613 16031.114 - 16131.938: 93.8489% ( 45) 00:07:03.613 16131.938 - 16232.763: 94.2840% ( 44) 00:07:03.613 16232.763 - 16333.588: 94.5807% ( 30) 00:07:03.613 16333.588 - 16434.412: 94.9466% ( 37) 00:07:03.613 16434.412 - 16535.237: 95.3521% ( 41) 00:07:03.613 16535.237 - 16636.062: 95.6784% ( 33) 00:07:03.613 16636.062 - 16736.886: 96.0245% ( 35) 00:07:03.614 16736.886 - 16837.711: 96.3509% ( 33) 00:07:03.614 16837.711 - 16938.535: 96.6278% ( 28) 00:07:03.614 16938.535 - 17039.360: 96.8552% ( 23) 00:07:03.614 17039.360 - 17140.185: 97.0629% ( 21) 00:07:03.614 17140.185 - 17241.009: 97.2112% ( 15) 00:07:03.614 17241.009 - 17341.834: 97.3398% ( 13) 00:07:03.614 17341.834 - 17442.658: 97.4189% ( 8) 00:07:03.614 17442.658 - 17543.483: 97.4486% ( 3) 00:07:03.614 17543.483 - 17644.308: 97.4684% ( 2) 00:07:03.614 17845.957 - 17946.782: 97.5079% ( 4) 00:07:03.614 17946.782 - 18047.606: 97.5672% ( 6) 00:07:03.614 18047.606 - 18148.431: 97.6167% ( 5) 00:07:03.614 18148.431 - 18249.255: 97.6464% ( 3) 00:07:03.614 18249.255 - 18350.080: 97.6760% ( 3) 00:07:03.614 18350.080 - 18450.905: 97.7057% ( 3) 00:07:03.614 18450.905 - 18551.729: 97.7453% ( 4) 00:07:03.614 18551.729 - 18652.554: 97.7749% ( 3) 00:07:03.614 18652.554 - 18753.378: 97.8738% ( 10) 00:07:03.614 18753.378 - 18854.203: 97.9628% ( 9) 00:07:03.614 18854.203 - 18955.028: 98.0320% ( 7) 00:07:03.614 18955.028 - 19055.852: 98.1112% ( 8) 00:07:03.614 19055.852 - 19156.677: 98.1903% ( 8) 00:07:03.614 19156.677 - 19257.502: 98.2793% ( 9) 00:07:03.614 19257.502 - 19358.326: 98.3485% ( 7) 00:07:03.614 19358.326 - 19459.151: 98.4276% ( 8) 00:07:03.614 19459.151 - 19559.975: 98.4968% ( 7) 00:07:03.614 19559.975 - 19660.800: 98.5562% ( 6) 00:07:03.614 19660.800 - 19761.625: 98.5957% ( 4) 00:07:03.614 19761.625 - 19862.449: 98.6452% ( 5) 00:07:03.614 19862.449 - 19963.274: 98.6946% ( 5) 00:07:03.614 19963.274 - 20064.098: 98.7342% ( 4) 00:07:03.614 24702.031 - 24802.855: 98.7441% ( 1) 00:07:03.614 24802.855 - 24903.680: 98.7737% ( 3) 00:07:03.614 24903.680 - 25004.505: 98.8133% ( 4) 00:07:03.614 25004.505 - 25105.329: 98.8430% ( 3) 00:07:03.614 25105.329 - 25206.154: 98.8726% ( 3) 00:07:03.614 25206.154 - 25306.978: 98.9023% ( 3) 00:07:03.614 25306.978 - 25407.803: 98.9419% ( 4) 00:07:03.614 25407.803 - 25508.628: 98.9715% ( 3) 00:07:03.614 25508.628 - 25609.452: 99.0012% ( 3) 00:07:03.614 25609.452 - 25710.277: 99.0407% ( 4) 00:07:03.614 25710.277 - 25811.102: 99.0803% ( 4) 00:07:03.614 25811.102 - 26012.751: 99.1396% ( 6) 00:07:03.614 26012.751 - 26214.400: 99.2089% ( 7) 00:07:03.614 26214.400 - 26416.049: 99.2682% ( 6) 00:07:03.614 26416.049 - 26617.698: 99.3275% ( 6) 00:07:03.614 26617.698 - 26819.348: 99.3671% ( 4) 00:07:03.614 31860.578 - 32062.228: 99.3869% ( 2) 00:07:03.614 32062.228 - 32263.877: 99.4462% ( 6) 00:07:03.614 32263.877 - 32465.526: 99.5154% ( 7) 00:07:03.614 32465.526 - 32667.175: 99.5847% ( 7) 00:07:03.614 32667.175 - 32868.825: 99.6539% ( 7) 00:07:03.614 32868.825 - 33070.474: 99.7231% ( 7) 00:07:03.614 33070.474 - 33272.123: 99.7923% ( 7) 00:07:03.614 33272.123 - 33473.772: 99.8616% ( 7) 00:07:03.614 33473.772 - 33675.422: 99.9308% ( 7) 00:07:03.614 33675.422 - 33877.071: 100.0000% ( 7) 
00:07:03.614 00:07:03.614 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:03.614 ============================================================================== 00:07:03.614 Range in us Cumulative IO count 00:07:03.614 9023.803 - 9074.215: 0.0099% ( 1) 00:07:03.614 9074.215 - 9124.628: 0.0198% ( 1) 00:07:03.614 9124.628 - 9175.040: 0.0791% ( 6) 00:07:03.614 9175.040 - 9225.452: 0.1483% ( 7) 00:07:03.614 9225.452 - 9275.865: 0.2077% ( 6) 00:07:03.614 9275.865 - 9326.277: 0.3362% ( 13) 00:07:03.614 9326.277 - 9376.689: 0.4747% ( 14) 00:07:03.614 9376.689 - 9427.102: 0.7911% ( 32) 00:07:03.614 9427.102 - 9477.514: 1.1373% ( 35) 00:07:03.614 9477.514 - 9527.926: 1.5229% ( 39) 00:07:03.614 9527.926 - 9578.338: 1.8592% ( 34) 00:07:03.614 9578.338 - 9628.751: 2.1954% ( 34) 00:07:03.614 9628.751 - 9679.163: 2.5316% ( 34) 00:07:03.614 9679.163 - 9729.575: 2.8877% ( 36) 00:07:03.614 9729.575 - 9779.988: 3.3228% ( 44) 00:07:03.614 9779.988 - 9830.400: 4.0447% ( 73) 00:07:03.614 9830.400 - 9880.812: 4.7073% ( 67) 00:07:03.614 9880.812 - 9931.225: 5.6566% ( 96) 00:07:03.614 9931.225 - 9981.637: 6.5862% ( 94) 00:07:03.614 9981.637 - 10032.049: 7.5653% ( 99) 00:07:03.614 10032.049 - 10082.462: 8.5245% ( 97) 00:07:03.614 10082.462 - 10132.874: 9.8794% ( 137) 00:07:03.614 10132.874 - 10183.286: 11.0463% ( 118) 00:07:03.614 10183.286 - 10233.698: 12.3714% ( 134) 00:07:03.614 10233.698 - 10284.111: 13.7559% ( 140) 00:07:03.614 10284.111 - 10334.523: 15.1701% ( 143) 00:07:03.614 10334.523 - 10384.935: 16.5348% ( 138) 00:07:03.614 10384.935 - 10435.348: 17.6820% ( 116) 00:07:03.614 10435.348 - 10485.760: 18.7599% ( 109) 00:07:03.614 10485.760 - 10536.172: 19.7587% ( 101) 00:07:03.614 10536.172 - 10586.585: 20.6388% ( 89) 00:07:03.614 10586.585 - 10636.997: 21.6970% ( 107) 00:07:03.614 10636.997 - 10687.409: 22.7354% ( 105) 00:07:03.614 10687.409 - 10737.822: 23.6650% ( 94) 00:07:03.614 10737.822 - 10788.234: 24.8121% ( 116) 00:07:03.614 10788.234 - 10838.646: 25.7417% ( 94) 00:07:03.614 10838.646 - 10889.058: 26.5526% ( 82) 00:07:03.614 10889.058 - 10939.471: 27.3042% ( 76) 00:07:03.614 10939.471 - 10989.883: 28.2634% ( 97) 00:07:03.614 10989.883 - 11040.295: 29.1733% ( 92) 00:07:03.614 11040.295 - 11090.708: 30.0336% ( 87) 00:07:03.614 11090.708 - 11141.120: 31.0522% ( 103) 00:07:03.614 11141.120 - 11191.532: 32.0510% ( 101) 00:07:03.614 11191.532 - 11241.945: 32.8718% ( 83) 00:07:03.614 11241.945 - 11292.357: 33.7025% ( 84) 00:07:03.614 11292.357 - 11342.769: 34.2959% ( 60) 00:07:03.614 11342.769 - 11393.182: 35.0771% ( 79) 00:07:03.614 11393.182 - 11443.594: 35.9869% ( 92) 00:07:03.614 11443.594 - 11494.006: 36.9165% ( 94) 00:07:03.614 11494.006 - 11544.418: 37.9252% ( 102) 00:07:03.614 11544.418 - 11594.831: 38.8449% ( 93) 00:07:03.614 11594.831 - 11645.243: 39.6954% ( 86) 00:07:03.614 11645.243 - 11695.655: 40.5854% ( 90) 00:07:03.614 11695.655 - 11746.068: 41.5249% ( 95) 00:07:03.614 11746.068 - 11796.480: 42.9094% ( 140) 00:07:03.614 11796.480 - 11846.892: 44.0566% ( 116) 00:07:03.614 11846.892 - 11897.305: 45.0455% ( 100) 00:07:03.614 11897.305 - 11947.717: 46.1531% ( 112) 00:07:03.614 11947.717 - 11998.129: 47.0233% ( 88) 00:07:03.614 11998.129 - 12048.542: 47.8145% ( 80) 00:07:03.614 12048.542 - 12098.954: 48.6650% ( 86) 00:07:03.614 12098.954 - 12149.366: 49.7627% ( 111) 00:07:03.614 12149.366 - 12199.778: 50.6527% ( 90) 00:07:03.614 12199.778 - 12250.191: 51.5625% ( 92) 00:07:03.614 12250.191 - 12300.603: 52.1855% ( 63) 00:07:03.614 12300.603 - 12351.015: 53.0063% ( 83) 
00:07:03.614 12351.015 - 12401.428: 53.9557% ( 96) 00:07:03.614 12401.428 - 12451.840: 54.6974% ( 75) 00:07:03.614 12451.840 - 12502.252: 55.3204% ( 63) 00:07:03.614 12502.252 - 12552.665: 56.0127% ( 70) 00:07:03.614 12552.665 - 12603.077: 56.8829% ( 88) 00:07:03.614 12603.077 - 12653.489: 57.6444% ( 77) 00:07:03.614 12653.489 - 12703.902: 58.4256% ( 79) 00:07:03.614 12703.902 - 12754.314: 59.1179% ( 70) 00:07:03.614 12754.314 - 12804.726: 59.8892% ( 78) 00:07:03.614 12804.726 - 12855.138: 60.8188% ( 94) 00:07:03.614 12855.138 - 12905.551: 61.6594% ( 85) 00:07:03.614 12905.551 - 13006.375: 63.2318% ( 159) 00:07:03.614 13006.375 - 13107.200: 64.5273% ( 131) 00:07:03.614 13107.200 - 13208.025: 66.0305% ( 152) 00:07:03.614 13208.025 - 13308.849: 67.8006% ( 179) 00:07:03.614 13308.849 - 13409.674: 69.2346% ( 145) 00:07:03.614 13409.674 - 13510.498: 70.5597% ( 134) 00:07:03.614 13510.498 - 13611.323: 71.7860% ( 124) 00:07:03.614 13611.323 - 13712.148: 73.1606% ( 139) 00:07:03.614 13712.148 - 13812.972: 74.7231% ( 158) 00:07:03.614 13812.972 - 13913.797: 76.4043% ( 170) 00:07:03.614 13913.797 - 14014.622: 77.7492% ( 136) 00:07:03.614 14014.622 - 14115.446: 79.1436% ( 141) 00:07:03.614 14115.446 - 14216.271: 80.4786% ( 135) 00:07:03.614 14216.271 - 14317.095: 81.8236% ( 136) 00:07:03.614 14317.095 - 14417.920: 82.9905% ( 118) 00:07:03.614 14417.920 - 14518.745: 83.8706% ( 89) 00:07:03.614 14518.745 - 14619.569: 84.9189% ( 106) 00:07:03.614 14619.569 - 14720.394: 85.6013% ( 69) 00:07:03.614 14720.394 - 14821.218: 86.2342% ( 64) 00:07:03.614 14821.218 - 14922.043: 87.0253% ( 80) 00:07:03.614 14922.043 - 15022.868: 87.8758% ( 86) 00:07:03.614 15022.868 - 15123.692: 88.5087% ( 64) 00:07:03.614 15123.692 - 15224.517: 89.2702% ( 77) 00:07:03.614 15224.517 - 15325.342: 89.8833% ( 62) 00:07:03.614 15325.342 - 15426.166: 90.4371% ( 56) 00:07:03.614 15426.166 - 15526.991: 90.8821% ( 45) 00:07:03.614 15526.991 - 15627.815: 91.3667% ( 49) 00:07:03.614 15627.815 - 15728.640: 91.9007% ( 54) 00:07:03.614 15728.640 - 15829.465: 92.3655% ( 47) 00:07:03.614 15829.465 - 15930.289: 92.7907% ( 43) 00:07:03.614 15930.289 - 16031.114: 93.5225% ( 74) 00:07:03.614 16031.114 - 16131.938: 93.9082% ( 39) 00:07:03.614 16131.938 - 16232.763: 94.2445% ( 34) 00:07:03.614 16232.763 - 16333.588: 94.5708% ( 33) 00:07:03.614 16333.588 - 16434.412: 94.8972% ( 33) 00:07:03.614 16434.412 - 16535.237: 95.2729% ( 38) 00:07:03.614 16535.237 - 16636.062: 95.6290% ( 36) 00:07:03.614 16636.062 - 16736.886: 95.9355% ( 31) 00:07:03.614 16736.886 - 16837.711: 96.1432% ( 21) 00:07:03.614 16837.711 - 16938.535: 96.3706% ( 23) 00:07:03.614 16938.535 - 17039.360: 96.6080% ( 24) 00:07:03.614 17039.360 - 17140.185: 96.7860% ( 18) 00:07:03.614 17140.185 - 17241.009: 96.9047% ( 12) 00:07:03.614 17241.009 - 17341.834: 97.0332% ( 13) 00:07:03.614 17341.834 - 17442.658: 97.1915% ( 16) 00:07:03.614 17442.658 - 17543.483: 97.2706% ( 8) 00:07:03.614 17543.483 - 17644.308: 97.4090% ( 14) 00:07:03.614 17644.308 - 17745.132: 97.4881% ( 8) 00:07:03.614 17745.132 - 17845.957: 97.5771% ( 9) 00:07:03.614 17845.957 - 17946.782: 97.6365% ( 6) 00:07:03.614 17946.782 - 18047.606: 97.7255% ( 9) 00:07:03.615 18047.606 - 18148.431: 97.7848% ( 6) 00:07:03.615 18148.431 - 18249.255: 97.8145% ( 3) 00:07:03.615 18249.255 - 18350.080: 97.8540% ( 4) 00:07:03.615 18350.080 - 18450.905: 97.8837% ( 3) 00:07:03.615 18450.905 - 18551.729: 97.9233% ( 4) 00:07:03.615 18551.729 - 18652.554: 97.9628% ( 4) 00:07:03.615 18652.554 - 18753.378: 98.0024% ( 4) 00:07:03.615 18753.378 - 
18854.203: 98.0419% ( 4) 00:07:03.615 18854.203 - 18955.028: 98.0815% ( 4) 00:07:03.615 18955.028 - 19055.852: 98.1013% ( 2) 00:07:03.615 19660.800 - 19761.625: 98.1309% ( 3) 00:07:03.615 19761.625 - 19862.449: 98.1804% ( 5) 00:07:03.615 19862.449 - 19963.274: 98.2199% ( 4) 00:07:03.615 19963.274 - 20064.098: 98.2595% ( 4) 00:07:03.615 20064.098 - 20164.923: 98.2892% ( 3) 00:07:03.615 20164.923 - 20265.748: 98.3287% ( 4) 00:07:03.615 20265.748 - 20366.572: 98.3584% ( 3) 00:07:03.615 20366.572 - 20467.397: 98.3979% ( 4) 00:07:03.615 20467.397 - 20568.222: 98.4276% ( 3) 00:07:03.615 20568.222 - 20669.046: 98.4672% ( 4) 00:07:03.615 20669.046 - 20769.871: 98.5067% ( 4) 00:07:03.615 20769.871 - 20870.695: 98.5463% ( 4) 00:07:03.615 20870.695 - 20971.520: 98.5858% ( 4) 00:07:03.615 20971.520 - 21072.345: 98.6155% ( 3) 00:07:03.615 21072.345 - 21173.169: 98.6353% ( 2) 00:07:03.615 21173.169 - 21273.994: 98.6748% ( 4) 00:07:03.615 21273.994 - 21374.818: 98.7144% ( 4) 00:07:03.615 21374.818 - 21475.643: 98.7342% ( 2) 00:07:03.615 24097.083 - 24197.908: 98.7441% ( 1) 00:07:03.615 24197.908 - 24298.732: 98.7836% ( 4) 00:07:03.615 24298.732 - 24399.557: 98.8133% ( 3) 00:07:03.615 24399.557 - 24500.382: 98.8430% ( 3) 00:07:03.615 24500.382 - 24601.206: 98.8825% ( 4) 00:07:03.615 24601.206 - 24702.031: 98.9221% ( 4) 00:07:03.615 24702.031 - 24802.855: 98.9517% ( 3) 00:07:03.615 24802.855 - 24903.680: 98.9814% ( 3) 00:07:03.615 24903.680 - 25004.505: 99.0111% ( 3) 00:07:03.615 25004.505 - 25105.329: 99.0407% ( 3) 00:07:03.615 25105.329 - 25206.154: 99.0803% ( 4) 00:07:03.615 25206.154 - 25306.978: 99.1100% ( 3) 00:07:03.615 25306.978 - 25407.803: 99.1396% ( 3) 00:07:03.615 25407.803 - 25508.628: 99.1792% ( 4) 00:07:03.615 25508.628 - 25609.452: 99.2089% ( 3) 00:07:03.615 25609.452 - 25710.277: 99.2385% ( 3) 00:07:03.615 25710.277 - 25811.102: 99.2781% ( 4) 00:07:03.615 25811.102 - 26012.751: 99.3374% ( 6) 00:07:03.615 26012.751 - 26214.400: 99.3671% ( 3) 00:07:03.615 31255.631 - 31457.280: 99.3869% ( 2) 00:07:03.615 31457.280 - 31658.929: 99.4462% ( 6) 00:07:03.615 31658.929 - 31860.578: 99.5055% ( 6) 00:07:03.615 31860.578 - 32062.228: 99.5649% ( 6) 00:07:03.615 32062.228 - 32263.877: 99.6341% ( 7) 00:07:03.615 32263.877 - 32465.526: 99.7033% ( 7) 00:07:03.615 32465.526 - 32667.175: 99.7725% ( 7) 00:07:03.615 32667.175 - 32868.825: 99.8418% ( 7) 00:07:03.615 32868.825 - 33070.474: 99.9011% ( 6) 00:07:03.615 33070.474 - 33272.123: 99.9703% ( 7) 00:07:03.615 33272.123 - 33473.772: 100.0000% ( 3) 00:07:03.615 00:07:03.615 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:03.615 ============================================================================== 00:07:03.615 Range in us Cumulative IO count 00:07:03.615 9023.803 - 9074.215: 0.0099% ( 1) 00:07:03.615 9074.215 - 9124.628: 0.0198% ( 1) 00:07:03.615 9124.628 - 9175.040: 0.0297% ( 1) 00:07:03.615 9175.040 - 9225.452: 0.0989% ( 7) 00:07:03.615 9225.452 - 9275.865: 0.2077% ( 11) 00:07:03.615 9275.865 - 9326.277: 0.3362% ( 13) 00:07:03.615 9326.277 - 9376.689: 0.4945% ( 16) 00:07:03.615 9376.689 - 9427.102: 0.6922% ( 20) 00:07:03.615 9427.102 - 9477.514: 0.9889% ( 30) 00:07:03.615 9477.514 - 9527.926: 1.3845% ( 40) 00:07:03.615 9527.926 - 9578.338: 1.7603% ( 38) 00:07:03.615 9578.338 - 9628.751: 2.0866% ( 33) 00:07:03.615 9628.751 - 9679.163: 2.4328% ( 35) 00:07:03.615 9679.163 - 9729.575: 2.8382% ( 41) 00:07:03.615 9729.575 - 9779.988: 3.4415% ( 61) 00:07:03.615 9779.988 - 9830.400: 3.9161% ( 48) 00:07:03.615 9830.400 - 9880.812: 
4.3710% ( 46) 00:07:03.615 9880.812 - 9931.225: 4.8457% ( 48) 00:07:03.615 9931.225 - 9981.637: 5.5676% ( 73) 00:07:03.615 9981.637 - 10032.049: 6.3390% ( 78) 00:07:03.615 10032.049 - 10082.462: 7.2093% ( 88) 00:07:03.615 10082.462 - 10132.874: 8.4059% ( 121) 00:07:03.615 10132.874 - 10183.286: 9.3651% ( 97) 00:07:03.615 10183.286 - 10233.698: 10.9276% ( 158) 00:07:03.615 10233.698 - 10284.111: 12.2231% ( 131) 00:07:03.615 10284.111 - 10334.523: 13.5285% ( 132) 00:07:03.615 10334.523 - 10384.935: 14.9328% ( 142) 00:07:03.615 10384.935 - 10435.348: 16.4458% ( 153) 00:07:03.615 10435.348 - 10485.760: 17.9094% ( 148) 00:07:03.615 10485.760 - 10536.172: 19.2148% ( 132) 00:07:03.615 10536.172 - 10586.585: 20.3323% ( 113) 00:07:03.615 10586.585 - 10636.997: 21.5585% ( 124) 00:07:03.615 10636.997 - 10687.409: 22.3596% ( 81) 00:07:03.615 10687.409 - 10737.822: 23.2496% ( 90) 00:07:03.615 10737.822 - 10788.234: 24.2385% ( 100) 00:07:03.615 10788.234 - 10838.646: 25.0593% ( 83) 00:07:03.615 10838.646 - 10889.058: 25.9098% ( 86) 00:07:03.615 10889.058 - 10939.471: 26.7900% ( 89) 00:07:03.615 10939.471 - 10989.883: 27.8085% ( 103) 00:07:03.615 10989.883 - 11040.295: 28.8766% ( 108) 00:07:03.615 11040.295 - 11090.708: 29.5392% ( 67) 00:07:03.615 11090.708 - 11141.120: 30.2215% ( 69) 00:07:03.615 11141.120 - 11191.532: 30.9830% ( 77) 00:07:03.615 11191.532 - 11241.945: 31.9422% ( 97) 00:07:03.615 11241.945 - 11292.357: 32.9312% ( 100) 00:07:03.615 11292.357 - 11342.769: 33.7718% ( 85) 00:07:03.615 11342.769 - 11393.182: 34.8200% ( 106) 00:07:03.615 11393.182 - 11443.594: 35.6507% ( 84) 00:07:03.615 11443.594 - 11494.006: 36.6693% ( 103) 00:07:03.615 11494.006 - 11544.418: 37.6384% ( 98) 00:07:03.615 11544.418 - 11594.831: 38.7559% ( 113) 00:07:03.615 11594.831 - 11645.243: 40.1701% ( 143) 00:07:03.615 11645.243 - 11695.655: 41.4953% ( 134) 00:07:03.615 11695.655 - 11746.068: 42.7413% ( 126) 00:07:03.615 11746.068 - 11796.480: 43.9181% ( 119) 00:07:03.615 11796.480 - 11846.892: 45.1147% ( 121) 00:07:03.615 11846.892 - 11897.305: 46.6278% ( 153) 00:07:03.615 11897.305 - 11947.717: 47.6661% ( 105) 00:07:03.615 11947.717 - 11998.129: 48.6748% ( 102) 00:07:03.615 11998.129 - 12048.542: 49.5649% ( 90) 00:07:03.615 12048.542 - 12098.954: 50.3956% ( 84) 00:07:03.615 12098.954 - 12149.366: 51.0680% ( 68) 00:07:03.615 12149.366 - 12199.778: 51.9680% ( 91) 00:07:03.615 12199.778 - 12250.191: 52.6503% ( 69) 00:07:03.615 12250.191 - 12300.603: 53.3129% ( 67) 00:07:03.615 12300.603 - 12351.015: 54.0249% ( 72) 00:07:03.615 12351.015 - 12401.428: 54.5392% ( 52) 00:07:03.615 12401.428 - 12451.840: 55.1820% ( 65) 00:07:03.615 12451.840 - 12502.252: 55.8050% ( 63) 00:07:03.615 12502.252 - 12552.665: 56.5763% ( 78) 00:07:03.615 12552.665 - 12603.077: 57.2983% ( 73) 00:07:03.615 12603.077 - 12653.489: 58.1290% ( 84) 00:07:03.615 12653.489 - 12703.902: 59.0783% ( 96) 00:07:03.615 12703.902 - 12754.314: 59.9288% ( 86) 00:07:03.615 12754.314 - 12804.726: 60.8584% ( 94) 00:07:03.615 12804.726 - 12855.138: 61.7286% ( 88) 00:07:03.615 12855.138 - 12905.551: 62.6286% ( 91) 00:07:03.615 12905.551 - 13006.375: 64.0823% ( 147) 00:07:03.615 13006.375 - 13107.200: 65.3679% ( 130) 00:07:03.615 13107.200 - 13208.025: 66.7919% ( 144) 00:07:03.615 13208.025 - 13308.849: 67.8105% ( 103) 00:07:03.615 13308.849 - 13409.674: 68.9280% ( 113) 00:07:03.615 13409.674 - 13510.498: 70.1839% ( 127) 00:07:03.615 13510.498 - 13611.323: 72.0530% ( 189) 00:07:03.615 13611.323 - 13712.148: 73.5166% ( 148) 00:07:03.615 13712.148 - 13812.972: 74.9506% ( 
145) 00:07:03.615 13812.972 - 13913.797: 76.6515% ( 172) 00:07:03.615 13913.797 - 14014.622: 77.8283% ( 119) 00:07:03.615 14014.622 - 14115.446: 79.0051% ( 119) 00:07:03.615 14115.446 - 14216.271: 80.0534% ( 106) 00:07:03.615 14216.271 - 14317.095: 80.9434% ( 90) 00:07:03.615 14317.095 - 14417.920: 81.7741% ( 84) 00:07:03.615 14417.920 - 14518.745: 82.5850% ( 82) 00:07:03.615 14518.745 - 14619.569: 83.5245% ( 95) 00:07:03.615 14619.569 - 14720.394: 84.6321% ( 112) 00:07:03.616 14720.394 - 14821.218: 85.9573% ( 134) 00:07:03.616 14821.218 - 14922.043: 86.9561% ( 101) 00:07:03.616 14922.043 - 15022.868: 87.8066% ( 86) 00:07:03.616 15022.868 - 15123.692: 88.5878% ( 79) 00:07:03.616 15123.692 - 15224.517: 89.3097% ( 73) 00:07:03.616 15224.517 - 15325.342: 90.0415% ( 74) 00:07:03.616 15325.342 - 15426.166: 90.7733% ( 74) 00:07:03.616 15426.166 - 15526.991: 91.1986% ( 43) 00:07:03.616 15526.991 - 15627.815: 91.4953% ( 30) 00:07:03.616 15627.815 - 15728.640: 91.8315% ( 34) 00:07:03.616 15728.640 - 15829.465: 92.2567% ( 43) 00:07:03.616 15829.465 - 15930.289: 92.5435% ( 29) 00:07:03.616 15930.289 - 16031.114: 92.8600% ( 32) 00:07:03.616 16031.114 - 16131.938: 93.1468% ( 29) 00:07:03.616 16131.938 - 16232.763: 93.4533% ( 31) 00:07:03.616 16232.763 - 16333.588: 93.8884% ( 44) 00:07:03.616 16333.588 - 16434.412: 94.3730% ( 49) 00:07:03.616 16434.412 - 16535.237: 94.8378% ( 47) 00:07:03.616 16535.237 - 16636.062: 95.2235% ( 39) 00:07:03.616 16636.062 - 16736.886: 95.7476% ( 53) 00:07:03.616 16736.886 - 16837.711: 96.2718% ( 53) 00:07:03.616 16837.711 - 16938.535: 96.6574% ( 39) 00:07:03.616 16938.535 - 17039.360: 96.9541% ( 30) 00:07:03.616 17039.360 - 17140.185: 97.2112% ( 26) 00:07:03.616 17140.185 - 17241.009: 97.4189% ( 21) 00:07:03.616 17241.009 - 17341.834: 97.5771% ( 16) 00:07:03.616 17341.834 - 17442.658: 97.6859% ( 11) 00:07:03.616 17442.658 - 17543.483: 97.7947% ( 11) 00:07:03.616 17543.483 - 17644.308: 97.8837% ( 9) 00:07:03.616 17644.308 - 17745.132: 97.9727% ( 9) 00:07:03.616 17745.132 - 17845.957: 98.0518% ( 8) 00:07:03.616 17845.957 - 17946.782: 98.1013% ( 5) 00:07:03.616 19660.800 - 19761.625: 98.1705% ( 7) 00:07:03.616 19761.625 - 19862.449: 98.2002% ( 3) 00:07:03.616 19862.449 - 19963.274: 98.2397% ( 4) 00:07:03.616 19963.274 - 20064.098: 98.2793% ( 4) 00:07:03.616 20064.098 - 20164.923: 98.3089% ( 3) 00:07:03.616 20164.923 - 20265.748: 98.3485% ( 4) 00:07:03.616 20265.748 - 20366.572: 98.3782% ( 3) 00:07:03.616 20366.572 - 20467.397: 98.4177% ( 4) 00:07:03.616 20467.397 - 20568.222: 98.4573% ( 4) 00:07:03.616 20568.222 - 20669.046: 98.4968% ( 4) 00:07:03.616 20669.046 - 20769.871: 98.5364% ( 4) 00:07:03.616 20769.871 - 20870.695: 98.5562% ( 2) 00:07:03.616 20870.695 - 20971.520: 98.5957% ( 4) 00:07:03.616 20971.520 - 21072.345: 98.6353% ( 4) 00:07:03.616 21072.345 - 21173.169: 98.6748% ( 4) 00:07:03.616 21173.169 - 21273.994: 98.7144% ( 4) 00:07:03.616 21273.994 - 21374.818: 98.7342% ( 2) 00:07:03.616 22584.714 - 22685.538: 98.7638% ( 3) 00:07:03.616 22685.538 - 22786.363: 98.7935% ( 3) 00:07:03.616 22786.363 - 22887.188: 98.8232% ( 3) 00:07:03.616 22887.188 - 22988.012: 98.8528% ( 3) 00:07:03.616 22988.012 - 23088.837: 98.8924% ( 4) 00:07:03.616 23088.837 - 23189.662: 98.9221% ( 3) 00:07:03.616 23189.662 - 23290.486: 98.9517% ( 3) 00:07:03.616 23290.486 - 23391.311: 98.9913% ( 4) 00:07:03.616 23391.311 - 23492.135: 99.0210% ( 3) 00:07:03.616 23492.135 - 23592.960: 99.0506% ( 3) 00:07:03.616 23592.960 - 23693.785: 99.0803% ( 3) 00:07:03.616 23693.785 - 23794.609: 99.1100% ( 3) 
00:07:03.616 23794.609 - 23895.434: 99.1396% ( 3) 00:07:03.616 23895.434 - 23996.258: 99.1693% ( 3) 00:07:03.616 23996.258 - 24097.083: 99.2089% ( 4) 00:07:03.616 24097.083 - 24197.908: 99.2385% ( 3) 00:07:03.616 24197.908 - 24298.732: 99.2682% ( 3) 00:07:03.616 24298.732 - 24399.557: 99.2880% ( 2) 00:07:03.616 24399.557 - 24500.382: 99.3275% ( 4) 00:07:03.616 24500.382 - 24601.206: 99.3572% ( 3) 00:07:03.616 24601.206 - 24702.031: 99.3671% ( 1) 00:07:03.616 29844.086 - 30045.735: 99.4264% ( 6) 00:07:03.616 30045.735 - 30247.385: 99.4956% ( 7) 00:07:03.616 30247.385 - 30449.034: 99.5451% ( 5) 00:07:03.616 30449.034 - 30650.683: 99.6242% ( 8) 00:07:03.616 30650.683 - 30852.332: 99.7033% ( 8) 00:07:03.616 30852.332 - 31053.982: 99.7725% ( 7) 00:07:03.616 31053.982 - 31255.631: 99.8517% ( 8) 00:07:03.616 31255.631 - 31457.280: 99.9407% ( 9) 00:07:03.616 31457.280 - 31658.929: 100.0000% ( 6) 00:07:03.616 00:07:03.616 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:03.616 ============================================================================== 00:07:03.616 Range in us Cumulative IO count 00:07:03.616 9175.040 - 9225.452: 0.0197% ( 2) 00:07:03.616 9225.452 - 9275.865: 0.0688% ( 5) 00:07:03.616 9275.865 - 9326.277: 0.1867% ( 12) 00:07:03.616 9326.277 - 9376.689: 0.3341% ( 15) 00:07:03.616 9376.689 - 9427.102: 0.5405% ( 21) 00:07:03.616 9427.102 - 9477.514: 0.7174% ( 18) 00:07:03.616 9477.514 - 9527.926: 1.0711% ( 36) 00:07:03.616 9527.926 - 9578.338: 1.5527% ( 49) 00:07:03.616 9578.338 - 9628.751: 2.1521% ( 61) 00:07:03.616 9628.751 - 9679.163: 2.8105% ( 67) 00:07:03.616 9679.163 - 9729.575: 3.5476% ( 75) 00:07:03.616 9729.575 - 9779.988: 4.2453% ( 71) 00:07:03.616 9779.988 - 9830.400: 4.9430% ( 71) 00:07:03.616 9830.400 - 9880.812: 5.5130% ( 58) 00:07:03.616 9880.812 - 9931.225: 6.2991% ( 80) 00:07:03.616 9931.225 - 9981.637: 7.1541% ( 87) 00:07:03.616 9981.637 - 10032.049: 8.0090% ( 87) 00:07:03.616 10032.049 - 10082.462: 9.1785% ( 119) 00:07:03.616 10082.462 - 10132.874: 10.4461% ( 129) 00:07:03.616 10132.874 - 10183.286: 11.6745% ( 125) 00:07:03.616 10183.286 - 10233.698: 12.5884% ( 93) 00:07:03.616 10233.698 - 10284.111: 13.6399% ( 107) 00:07:03.616 10284.111 - 10334.523: 14.7602% ( 114) 00:07:03.616 10334.523 - 10384.935: 15.7921% ( 105) 00:07:03.616 10384.935 - 10435.348: 16.6961% ( 92) 00:07:03.616 10435.348 - 10485.760: 17.6592% ( 98) 00:07:03.616 10485.760 - 10536.172: 18.5142% ( 87) 00:07:03.616 10536.172 - 10586.585: 19.5362% ( 104) 00:07:03.616 10586.585 - 10636.997: 20.4697% ( 95) 00:07:03.616 10636.997 - 10687.409: 21.3050% ( 85) 00:07:03.616 10687.409 - 10737.822: 22.0224% ( 73) 00:07:03.616 10737.822 - 10788.234: 22.8872% ( 88) 00:07:03.616 10788.234 - 10838.646: 23.7520% ( 88) 00:07:03.616 10838.646 - 10889.058: 24.6364% ( 90) 00:07:03.616 10889.058 - 10939.471: 25.6781% ( 106) 00:07:03.616 10939.471 - 10989.883: 26.7492% ( 109) 00:07:03.616 10989.883 - 11040.295: 27.7516% ( 102) 00:07:03.616 11040.295 - 11090.708: 28.6065% ( 87) 00:07:03.616 11090.708 - 11141.120: 29.4517% ( 86) 00:07:03.616 11141.120 - 11191.532: 30.5523% ( 112) 00:07:03.616 11191.532 - 11241.945: 31.5448% ( 101) 00:07:03.616 11241.945 - 11292.357: 32.5865% ( 106) 00:07:03.616 11292.357 - 11342.769: 33.4807% ( 91) 00:07:03.616 11342.769 - 11393.182: 34.4438% ( 98) 00:07:03.616 11393.182 - 11443.594: 35.4167% ( 99) 00:07:03.616 11443.594 - 11494.006: 36.5468% ( 115) 00:07:03.616 11494.006 - 11544.418: 37.6474% ( 112) 00:07:03.616 11544.418 - 11594.831: 38.8463% ( 122) 00:07:03.616 
11594.831 - 11645.243: 39.9273% ( 110) 00:07:03.616 11645.243 - 11695.655: 41.0377% ( 113) 00:07:03.616 11695.655 - 11746.068: 42.1973% ( 118) 00:07:03.616 11746.068 - 11796.480: 43.5829% ( 141) 00:07:03.616 11796.480 - 11846.892: 45.1258% ( 157) 00:07:03.616 11846.892 - 11897.305: 46.5998% ( 150) 00:07:03.616 11897.305 - 11947.717: 47.7987% ( 122) 00:07:03.616 11947.717 - 11998.129: 48.9485% ( 117) 00:07:03.616 11998.129 - 12048.542: 49.9312% ( 100) 00:07:03.616 12048.542 - 12098.954: 50.8353% ( 92) 00:07:03.616 12098.954 - 12149.366: 51.5330% ( 71) 00:07:03.616 12149.366 - 12199.778: 52.1325% ( 61) 00:07:03.616 12199.778 - 12250.191: 52.8498% ( 73) 00:07:03.616 12250.191 - 12300.603: 53.6065% ( 77) 00:07:03.616 12300.603 - 12351.015: 54.5008% ( 91) 00:07:03.616 12351.015 - 12401.428: 55.7881% ( 131) 00:07:03.616 12401.428 - 12451.840: 56.4465% ( 67) 00:07:03.616 12451.840 - 12502.252: 57.1148% ( 68) 00:07:03.616 12502.252 - 12552.665: 57.9206% ( 82) 00:07:03.616 12552.665 - 12603.077: 58.4513% ( 54) 00:07:03.616 12603.077 - 12653.489: 58.9819% ( 54) 00:07:03.616 12653.489 - 12703.902: 59.6010% ( 63) 00:07:03.616 12703.902 - 12754.314: 60.2300% ( 64) 00:07:03.616 12754.314 - 12804.726: 60.9473% ( 73) 00:07:03.616 12804.726 - 12855.138: 61.6647% ( 73) 00:07:03.616 12855.138 - 12905.551: 62.3526% ( 70) 00:07:03.616 12905.551 - 13006.375: 64.2787% ( 196) 00:07:03.616 13006.375 - 13107.200: 65.8412% ( 159) 00:07:03.616 13107.200 - 13208.025: 67.2366% ( 142) 00:07:03.616 13208.025 - 13308.849: 68.7991% ( 159) 00:07:03.616 13308.849 - 13409.674: 70.2142% ( 144) 00:07:03.616 13409.674 - 13510.498: 71.1969% ( 100) 00:07:03.616 13510.498 - 13611.323: 72.4057% ( 123) 00:07:03.616 13611.323 - 13712.148: 73.5063% ( 112) 00:07:03.616 13712.148 - 13812.972: 74.7150% ( 123) 00:07:03.616 13812.972 - 13913.797: 75.8943% ( 120) 00:07:03.616 13913.797 - 14014.622: 77.0539% ( 118) 00:07:03.616 14014.622 - 14115.446: 78.0366% ( 100) 00:07:03.616 14115.446 - 14216.271: 79.1274% ( 111) 00:07:03.616 14216.271 - 14317.095: 80.5818% ( 148) 00:07:03.616 14317.095 - 14417.920: 81.4465% ( 88) 00:07:03.616 14417.920 - 14518.745: 82.2327% ( 80) 00:07:03.616 14518.745 - 14619.569: 83.0582% ( 84) 00:07:03.616 14619.569 - 14720.394: 84.2472% ( 121) 00:07:03.616 14720.394 - 14821.218: 85.4953% ( 127) 00:07:03.616 14821.218 - 14922.043: 86.9300% ( 146) 00:07:03.616 14922.043 - 15022.868: 87.8538% ( 94) 00:07:03.616 15022.868 - 15123.692: 88.6498% ( 81) 00:07:03.616 15123.692 - 15224.517: 89.3868% ( 75) 00:07:03.616 15224.517 - 15325.342: 90.0452% ( 67) 00:07:03.616 15325.342 - 15426.166: 90.7134% ( 68) 00:07:03.616 15426.166 - 15526.991: 91.3522% ( 65) 00:07:03.616 15526.991 - 15627.815: 91.9713% ( 63) 00:07:03.616 15627.815 - 15728.640: 92.5118% ( 55) 00:07:03.616 15728.640 - 15829.465: 92.9147% ( 41) 00:07:03.616 15829.465 - 15930.289: 93.3766% ( 47) 00:07:03.616 15930.289 - 16031.114: 93.8090% ( 44) 00:07:03.617 16031.114 - 16131.938: 94.1333% ( 33) 00:07:03.617 16131.938 - 16232.763: 94.4182% ( 29) 00:07:03.617 16232.763 - 16333.588: 94.6443% ( 23) 00:07:03.617 16333.588 - 16434.412: 94.8703% ( 23) 00:07:03.617 16434.412 - 16535.237: 95.1160% ( 25) 00:07:03.617 16535.237 - 16636.062: 95.4108% ( 30) 00:07:03.617 16636.062 - 16736.886: 95.6270% ( 22) 00:07:03.617 16736.886 - 16837.711: 95.8923% ( 27) 00:07:03.617 16837.711 - 16938.535: 96.2362% ( 35) 00:07:03.617 16938.535 - 17039.360: 96.4819% ( 25) 00:07:03.617 17039.360 - 17140.185: 96.7374% ( 26) 00:07:03.617 17140.185 - 17241.009: 97.0126% ( 28) 00:07:03.617 
17241.009 - 17341.834: 97.2976% ( 29) 00:07:03.617 17341.834 - 17442.658: 97.5825% ( 29) 00:07:03.617 17442.658 - 17543.483: 97.9363% ( 36) 00:07:03.617 17543.483 - 17644.308: 98.2999% ( 37) 00:07:03.617 17644.308 - 17745.132: 98.5554% ( 26) 00:07:03.617 17745.132 - 17845.957: 98.6635% ( 11) 00:07:03.617 17845.957 - 17946.782: 98.7127% ( 5) 00:07:03.617 17946.782 - 18047.606: 98.7421% ( 3) 00:07:03.617 19055.852 - 19156.677: 98.7618% ( 2) 00:07:03.617 19156.677 - 19257.502: 98.8208% ( 6) 00:07:03.617 19257.502 - 19358.326: 98.8502% ( 3) 00:07:03.617 19358.326 - 19459.151: 98.8797% ( 3) 00:07:03.617 19459.151 - 19559.975: 98.9190% ( 4) 00:07:03.617 19559.975 - 19660.800: 98.9485% ( 3) 00:07:03.617 19660.800 - 19761.625: 98.9780% ( 3) 00:07:03.617 19761.625 - 19862.449: 99.0075% ( 3) 00:07:03.617 19862.449 - 19963.274: 99.0369% ( 3) 00:07:03.617 19963.274 - 20064.098: 99.0763% ( 4) 00:07:03.617 20064.098 - 20164.923: 99.1156% ( 4) 00:07:03.617 20164.923 - 20265.748: 99.1647% ( 5) 00:07:03.617 20265.748 - 20366.572: 99.2040% ( 4) 00:07:03.617 20366.572 - 20467.397: 99.2433% ( 4) 00:07:03.617 20467.397 - 20568.222: 99.2925% ( 5) 00:07:03.617 20568.222 - 20669.046: 99.3318% ( 4) 00:07:03.617 20669.046 - 20769.871: 99.3711% ( 4) 00:07:03.617 22282.240 - 22383.065: 99.3809% ( 1) 00:07:03.617 22383.065 - 22483.889: 99.4202% ( 4) 00:07:03.617 22483.889 - 22584.714: 99.4497% ( 3) 00:07:03.617 22584.714 - 22685.538: 99.4792% ( 3) 00:07:03.617 22685.538 - 22786.363: 99.5185% ( 4) 00:07:03.617 22786.363 - 22887.188: 99.5578% ( 4) 00:07:03.617 22887.188 - 22988.012: 99.5873% ( 3) 00:07:03.617 22988.012 - 23088.837: 99.6266% ( 4) 00:07:03.617 23088.837 - 23189.662: 99.6659% ( 4) 00:07:03.617 23189.662 - 23290.486: 99.6954% ( 3) 00:07:03.617 23290.486 - 23391.311: 99.7347% ( 4) 00:07:03.617 23391.311 - 23492.135: 99.7642% ( 3) 00:07:03.617 23492.135 - 23592.960: 99.7936% ( 3) 00:07:03.617 23592.960 - 23693.785: 99.8231% ( 3) 00:07:03.617 23693.785 - 23794.609: 99.8526% ( 3) 00:07:03.617 23794.609 - 23895.434: 99.8919% ( 4) 00:07:03.617 23895.434 - 23996.258: 99.9214% ( 3) 00:07:03.617 23996.258 - 24097.083: 99.9607% ( 4) 00:07:03.617 24097.083 - 24197.908: 99.9803% ( 2) 00:07:03.617 24197.908 - 24298.732: 100.0000% ( 2) 00:07:03.617 00:07:03.617 23:44:36 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:03.617 00:07:03.617 real 0m2.545s 00:07:03.617 user 0m2.208s 00:07:03.617 sys 0m0.215s 00:07:03.617 23:44:36 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.617 23:44:36 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.617 ************************************ 00:07:03.617 END TEST nvme_perf 00:07:03.617 ************************************ 00:07:03.617 23:44:36 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:03.617 23:44:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:03.617 23:44:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.617 23:44:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.617 ************************************ 00:07:03.617 START TEST nvme_hello_world 00:07:03.617 ************************************ 00:07:03.617 23:44:36 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:03.875 Initializing NVMe Controllers 00:07:03.875 Attached to 0000:00:13.0 00:07:03.875 Namespace ID: 1 size: 1GB 00:07:03.875 Attached to 0000:00:10.0 
00:07:03.875 Namespace ID: 1 size: 6GB 00:07:03.875 Attached to 0000:00:11.0 00:07:03.875 Namespace ID: 1 size: 5GB 00:07:03.875 Attached to 0000:00:12.0 00:07:03.875 Namespace ID: 1 size: 4GB 00:07:03.875 Namespace ID: 2 size: 4GB 00:07:03.875 Namespace ID: 3 size: 4GB 00:07:03.875 Initialization complete. 00:07:03.875 INFO: using host memory buffer for IO 00:07:03.875 Hello world! 00:07:03.875 INFO: using host memory buffer for IO 00:07:03.875 Hello world! 00:07:03.875 INFO: using host memory buffer for IO 00:07:03.875 Hello world! 00:07:03.875 INFO: using host memory buffer for IO 00:07:03.875 Hello world! 00:07:03.875 INFO: using host memory buffer for IO 00:07:03.875 Hello world! 00:07:03.875 INFO: using host memory buffer for IO 00:07:03.875 Hello world! 00:07:03.875 ************************************ 00:07:03.875 END TEST nvme_hello_world 00:07:03.875 ************************************ 00:07:03.875 00:07:03.875 real 0m0.246s 00:07:03.875 user 0m0.084s 00:07:03.875 sys 0m0.114s 00:07:03.875 23:44:36 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.875 23:44:36 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:03.875 23:44:36 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:03.875 23:44:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.875 23:44:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.875 23:44:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.875 ************************************ 00:07:03.875 START TEST nvme_sgl 00:07:03.875 ************************************ 00:07:03.875 23:44:36 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:04.133 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:04.133 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:04.133 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:04.133 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:04.133 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:04.133 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:04.133 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:04.133 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:04.133 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:04.133 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:04.133 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:04.133 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:04.133 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:04.133 0000:00:11.0: build_io_request_11 Invalid 
IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:04.133 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:04.133 NVMe Readv/Writev Request test 00:07:04.133 Attached to 0000:00:13.0 00:07:04.133 Attached to 0000:00:10.0 00:07:04.133 Attached to 0000:00:11.0 00:07:04.133 Attached to 0000:00:12.0 00:07:04.133 0000:00:10.0: build_io_request_2 test passed 00:07:04.133 0000:00:10.0: build_io_request_4 test passed 00:07:04.133 0000:00:10.0: build_io_request_5 test passed 00:07:04.133 0000:00:10.0: build_io_request_6 test passed 00:07:04.133 0000:00:10.0: build_io_request_7 test passed 00:07:04.133 0000:00:10.0: build_io_request_10 test passed 00:07:04.133 0000:00:11.0: build_io_request_2 test passed 00:07:04.133 0000:00:11.0: build_io_request_4 test passed 00:07:04.133 0000:00:11.0: build_io_request_5 test passed 00:07:04.133 0000:00:11.0: build_io_request_6 test passed 00:07:04.134 0000:00:11.0: build_io_request_7 test passed 00:07:04.134 0000:00:11.0: build_io_request_10 test passed 00:07:04.134 Cleaning up... 00:07:04.134 00:07:04.134 real 0m0.315s 00:07:04.134 user 0m0.153s 00:07:04.134 sys 0m0.111s 00:07:04.134 23:44:36 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.134 23:44:36 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:04.134 ************************************ 00:07:04.134 END TEST nvme_sgl 00:07:04.134 ************************************ 00:07:04.134 23:44:36 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:04.134 23:44:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.134 23:44:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.134 23:44:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.134 ************************************ 00:07:04.134 START TEST nvme_e2edp 00:07:04.134 ************************************ 00:07:04.134 23:44:36 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:04.390 NVMe Write/Read with End-to-End data protection test 00:07:04.390 Attached to 0000:00:13.0 00:07:04.390 Attached to 0000:00:10.0 00:07:04.390 Attached to 0000:00:11.0 00:07:04.390 Attached to 0000:00:12.0 00:07:04.390 Cleaning up... 
00:07:04.390 00:07:04.390 real 0m0.242s 00:07:04.390 user 0m0.077s 00:07:04.390 sys 0m0.117s 00:07:04.390 23:44:37 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.390 23:44:37 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:04.390 ************************************ 00:07:04.390 END TEST nvme_e2edp 00:07:04.390 ************************************ 00:07:04.391 23:44:37 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:04.391 23:44:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.391 23:44:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.391 23:44:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.391 ************************************ 00:07:04.391 START TEST nvme_reserve 00:07:04.391 ************************************ 00:07:04.391 23:44:37 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:04.648 ===================================================== 00:07:04.648 NVMe Controller at PCI bus 0, device 19, function 0 00:07:04.648 ===================================================== 00:07:04.648 Reservations: Not Supported 00:07:04.648 ===================================================== 00:07:04.648 NVMe Controller at PCI bus 0, device 16, function 0 00:07:04.648 ===================================================== 00:07:04.648 Reservations: Not Supported 00:07:04.648 ===================================================== 00:07:04.648 NVMe Controller at PCI bus 0, device 17, function 0 00:07:04.648 ===================================================== 00:07:04.648 Reservations: Not Supported 00:07:04.648 ===================================================== 00:07:04.648 NVMe Controller at PCI bus 0, device 18, function 0 00:07:04.648 ===================================================== 00:07:04.648 Reservations: Not Supported 00:07:04.648 Reservation test passed 00:07:04.648 00:07:04.648 real 0m0.240s 00:07:04.648 user 0m0.080s 00:07:04.648 sys 0m0.103s 00:07:04.648 23:44:37 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.648 23:44:37 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:04.648 ************************************ 00:07:04.648 END TEST nvme_reserve 00:07:04.648 ************************************ 00:07:04.648 23:44:37 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:04.648 23:44:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.648 23:44:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.648 23:44:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.648 ************************************ 00:07:04.648 START TEST nvme_err_injection 00:07:04.648 ************************************ 00:07:04.648 23:44:37 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:04.906 NVMe Error Injection test 00:07:04.906 Attached to 0000:00:13.0 00:07:04.906 Attached to 0000:00:10.0 00:07:04.906 Attached to 0000:00:11.0 00:07:04.906 Attached to 0000:00:12.0 00:07:04.906 0000:00:12.0: get features failed as expected 00:07:04.906 0000:00:13.0: get features failed as expected 00:07:04.906 0000:00:10.0: get features failed as expected 00:07:04.906 0000:00:11.0: get features failed as expected 00:07:04.906 
0000:00:13.0: get features successfully as expected 00:07:04.906 0000:00:10.0: get features successfully as expected 00:07:04.906 0000:00:11.0: get features successfully as expected 00:07:04.906 0000:00:12.0: get features successfully as expected 00:07:04.906 0000:00:13.0: read failed as expected 00:07:04.906 0000:00:10.0: read failed as expected 00:07:04.906 0000:00:11.0: read failed as expected 00:07:04.906 0000:00:12.0: read failed as expected 00:07:04.906 0000:00:13.0: read successfully as expected 00:07:04.906 0000:00:10.0: read successfully as expected 00:07:04.906 0000:00:11.0: read successfully as expected 00:07:04.906 0000:00:12.0: read successfully as expected 00:07:04.906 Cleaning up... 00:07:04.906 00:07:04.906 real 0m0.235s 00:07:04.906 user 0m0.089s 00:07:04.906 sys 0m0.104s 00:07:04.906 23:44:37 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.906 23:44:37 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:04.906 ************************************ 00:07:04.906 END TEST nvme_err_injection 00:07:04.906 ************************************ 00:07:05.164 23:44:37 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:05.165 23:44:37 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:05.165 23:44:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.165 23:44:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.165 ************************************ 00:07:05.165 START TEST nvme_overhead 00:07:05.165 ************************************ 00:07:05.165 23:44:37 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:06.538 Initializing NVMe Controllers 00:07:06.538 Attached to 0000:00:13.0 00:07:06.538 Attached to 0000:00:10.0 00:07:06.538 Attached to 0000:00:11.0 00:07:06.538 Attached to 0000:00:12.0 00:07:06.538 Initialization complete. Launching workers. 
00:07:06.538 submit (in ns) avg, min, max = 12596.3, 10613.8, 478984.6 00:07:06.538 complete (in ns) avg, min, max = 7783.6, 7279.2, 124870.0 00:07:06.538 00:07:06.538 Submit histogram 00:07:06.538 ================ 00:07:06.538 Range in us Cumulative Count 00:07:06.538 10.585 - 10.634: 0.0114% ( 1) 00:07:06.538 11.126 - 11.175: 0.0343% ( 2) 00:07:06.538 11.274 - 11.323: 0.0457% ( 1) 00:07:06.538 11.422 - 11.471: 0.0572% ( 1) 00:07:06.538 11.471 - 11.520: 0.0686% ( 1) 00:07:06.538 11.569 - 11.618: 0.1143% ( 4) 00:07:06.538 11.618 - 11.668: 0.1715% ( 5) 00:07:06.538 11.668 - 11.717: 0.5145% ( 30) 00:07:06.538 11.717 - 11.766: 1.2577% ( 65) 00:07:06.538 11.766 - 11.815: 2.9271% ( 146) 00:07:06.538 11.815 - 11.865: 5.7169% ( 244) 00:07:06.538 11.865 - 11.914: 10.3247% ( 403) 00:07:06.538 11.914 - 11.963: 16.8306% ( 569) 00:07:06.538 11.963 - 12.012: 23.8280% ( 612) 00:07:06.538 12.012 - 12.062: 31.8203% ( 699) 00:07:06.538 12.062 - 12.111: 40.6243% ( 770) 00:07:06.538 12.111 - 12.160: 48.3650% ( 677) 00:07:06.538 12.160 - 12.209: 55.0766% ( 587) 00:07:06.538 12.209 - 12.258: 61.0908% ( 526) 00:07:06.538 12.258 - 12.308: 66.3275% ( 458) 00:07:06.538 12.308 - 12.357: 70.6837% ( 381) 00:07:06.538 12.357 - 12.406: 74.0567% ( 295) 00:07:06.538 12.406 - 12.455: 77.0066% ( 258) 00:07:06.538 12.455 - 12.505: 79.2820% ( 199) 00:07:06.538 12.505 - 12.554: 81.3515% ( 181) 00:07:06.538 12.554 - 12.603: 83.2266% ( 164) 00:07:06.538 12.603 - 12.702: 86.4967% ( 286) 00:07:06.538 12.702 - 12.800: 89.0350% ( 222) 00:07:06.538 12.800 - 12.898: 90.9787% ( 170) 00:07:06.538 12.898 - 12.997: 92.3965% ( 124) 00:07:06.538 12.997 - 13.095: 93.3570% ( 84) 00:07:06.538 13.095 - 13.194: 94.0773% ( 63) 00:07:06.538 13.194 - 13.292: 94.5118% ( 38) 00:07:06.538 13.292 - 13.391: 94.8662% ( 31) 00:07:06.538 13.391 - 13.489: 95.1178% ( 22) 00:07:06.538 13.489 - 13.588: 95.1978% ( 7) 00:07:06.538 13.588 - 13.686: 95.3464% ( 13) 00:07:06.538 13.686 - 13.785: 95.4379% ( 8) 00:07:06.538 13.785 - 13.883: 95.5065% ( 6) 00:07:06.538 13.883 - 13.982: 95.5523% ( 4) 00:07:06.538 13.982 - 14.080: 95.5980% ( 4) 00:07:06.538 14.080 - 14.178: 95.6209% ( 2) 00:07:06.538 14.178 - 14.277: 95.7123% ( 8) 00:07:06.538 14.277 - 14.375: 95.8381% ( 11) 00:07:06.538 14.375 - 14.474: 96.0096% ( 15) 00:07:06.538 14.474 - 14.572: 96.1125% ( 9) 00:07:06.538 14.572 - 14.671: 96.2497% ( 12) 00:07:06.538 14.671 - 14.769: 96.3984% ( 13) 00:07:06.538 14.769 - 14.868: 96.5584% ( 14) 00:07:06.538 14.868 - 14.966: 96.6385% ( 7) 00:07:06.538 14.966 - 15.065: 96.7757% ( 12) 00:07:06.538 15.065 - 15.163: 96.9014% ( 11) 00:07:06.538 15.163 - 15.262: 96.9700% ( 6) 00:07:06.538 15.262 - 15.360: 97.0958% ( 11) 00:07:06.538 15.360 - 15.458: 97.1873% ( 8) 00:07:06.538 15.458 - 15.557: 97.2788% ( 8) 00:07:06.538 15.557 - 15.655: 97.3016% ( 2) 00:07:06.538 15.655 - 15.754: 97.3245% ( 2) 00:07:06.538 15.754 - 15.852: 97.4045% ( 7) 00:07:06.538 15.852 - 15.951: 97.4846% ( 7) 00:07:06.538 15.951 - 16.049: 97.4960% ( 1) 00:07:06.538 16.049 - 16.148: 97.5532% ( 5) 00:07:06.538 16.148 - 16.246: 97.5760% ( 2) 00:07:06.538 16.246 - 16.345: 97.6446% ( 6) 00:07:06.538 16.345 - 16.443: 97.6789% ( 3) 00:07:06.538 16.542 - 16.640: 97.7361% ( 5) 00:07:06.538 16.640 - 16.738: 97.7818% ( 4) 00:07:06.538 16.738 - 16.837: 97.8047% ( 2) 00:07:06.538 16.837 - 16.935: 97.8276% ( 2) 00:07:06.538 16.935 - 17.034: 97.8390% ( 1) 00:07:06.538 17.034 - 17.132: 97.8619% ( 2) 00:07:06.538 17.132 - 17.231: 97.8962% ( 3) 00:07:06.538 17.329 - 17.428: 97.9190% ( 2) 00:07:06.538 17.428 - 17.526: 97.9419% 
( 2) 00:07:06.538 17.526 - 17.625: 97.9648% ( 2) 00:07:06.538 17.625 - 17.723: 98.0105% ( 4) 00:07:06.538 17.723 - 17.822: 98.0334% ( 2) 00:07:06.538 17.822 - 17.920: 98.0791% ( 4) 00:07:06.538 17.920 - 18.018: 98.1134% ( 3) 00:07:06.538 18.018 - 18.117: 98.2163% ( 9) 00:07:06.538 18.117 - 18.215: 98.2621% ( 4) 00:07:06.538 18.215 - 18.314: 98.3078% ( 4) 00:07:06.538 18.314 - 18.412: 98.4107% ( 9) 00:07:06.538 18.412 - 18.511: 98.5365% ( 11) 00:07:06.538 18.511 - 18.609: 98.6279% ( 8) 00:07:06.538 18.609 - 18.708: 98.7537% ( 11) 00:07:06.538 18.708 - 18.806: 98.8452% ( 8) 00:07:06.538 18.806 - 18.905: 98.9481% ( 9) 00:07:06.538 18.905 - 19.003: 99.0053% ( 5) 00:07:06.538 19.003 - 19.102: 99.1082% ( 9) 00:07:06.538 19.102 - 19.200: 99.2339% ( 11) 00:07:06.538 19.200 - 19.298: 99.2911% ( 5) 00:07:06.538 19.298 - 19.397: 99.3254% ( 3) 00:07:06.538 19.397 - 19.495: 99.3483% ( 2) 00:07:06.538 19.495 - 19.594: 99.3940% ( 4) 00:07:06.538 19.594 - 19.692: 99.4169% ( 2) 00:07:06.538 19.692 - 19.791: 99.4626% ( 4) 00:07:06.538 19.889 - 19.988: 99.4969% ( 3) 00:07:06.538 19.988 - 20.086: 99.5198% ( 2) 00:07:06.538 20.086 - 20.185: 99.5312% ( 1) 00:07:06.538 20.283 - 20.382: 99.5426% ( 1) 00:07:06.538 20.382 - 20.480: 99.5655% ( 2) 00:07:06.538 20.480 - 20.578: 99.5884% ( 2) 00:07:06.538 20.775 - 20.874: 99.6227% ( 3) 00:07:06.538 21.071 - 21.169: 99.6341% ( 1) 00:07:06.538 21.169 - 21.268: 99.6456% ( 1) 00:07:06.538 21.268 - 21.366: 99.6570% ( 1) 00:07:06.538 21.760 - 21.858: 99.6799% ( 2) 00:07:06.538 22.055 - 22.154: 99.6913% ( 1) 00:07:06.538 22.646 - 22.745: 99.7027% ( 1) 00:07:06.538 22.843 - 22.942: 99.7142% ( 1) 00:07:06.538 22.942 - 23.040: 99.7256% ( 1) 00:07:06.538 23.138 - 23.237: 99.7370% ( 1) 00:07:06.538 23.434 - 23.532: 99.7485% ( 1) 00:07:06.538 23.631 - 23.729: 99.7599% ( 1) 00:07:06.538 23.729 - 23.828: 99.7713% ( 1) 00:07:06.538 23.926 - 24.025: 99.7828% ( 1) 00:07:06.538 24.123 - 24.222: 99.7942% ( 1) 00:07:06.538 24.222 - 24.320: 99.8399% ( 4) 00:07:06.538 24.714 - 24.812: 99.8514% ( 1) 00:07:06.538 25.009 - 25.108: 99.8628% ( 1) 00:07:06.539 26.585 - 26.782: 99.8742% ( 1) 00:07:06.539 28.357 - 28.554: 99.8857% ( 1) 00:07:06.539 30.917 - 31.114: 99.8971% ( 1) 00:07:06.539 31.508 - 31.705: 99.9085% ( 1) 00:07:06.539 38.400 - 38.597: 99.9200% ( 1) 00:07:06.539 46.671 - 46.868: 99.9314% ( 1) 00:07:06.539 64.985 - 65.378: 99.9428% ( 1) 00:07:06.539 90.585 - 90.978: 99.9543% ( 1) 00:07:06.539 129.969 - 130.757: 99.9657% ( 1) 00:07:06.539 242.609 - 244.185: 99.9771% ( 1) 00:07:06.539 286.720 - 288.295: 99.9886% ( 1) 00:07:06.539 478.917 - 482.068: 100.0000% ( 1) 00:07:06.539 00:07:06.539 Complete histogram 00:07:06.539 ================== 00:07:06.539 Range in us Cumulative Count 00:07:06.539 7.237 - 7.286: 0.0114% ( 1) 00:07:06.539 7.286 - 7.335: 0.0915% ( 7) 00:07:06.539 7.335 - 7.385: 1.2234% ( 99) 00:07:06.539 7.385 - 7.434: 8.6668% ( 651) 00:07:06.539 7.434 - 7.483: 26.7208% ( 1579) 00:07:06.539 7.483 - 7.532: 47.1530% ( 1787) 00:07:06.539 7.532 - 7.582: 63.8806% ( 1463) 00:07:06.539 7.582 - 7.631: 73.6565% ( 855) 00:07:06.539 7.631 - 7.680: 79.9909% ( 554) 00:07:06.539 7.680 - 7.729: 84.3243% ( 379) 00:07:06.539 7.729 - 7.778: 86.7254% ( 210) 00:07:06.539 7.778 - 7.828: 88.1775% ( 127) 00:07:06.539 7.828 - 7.877: 89.1036% ( 81) 00:07:06.539 7.877 - 7.926: 89.5838% ( 42) 00:07:06.539 7.926 - 7.975: 90.0869% ( 44) 00:07:06.539 7.975 - 8.025: 90.9787% ( 78) 00:07:06.539 8.025 - 8.074: 91.9620% ( 86) 00:07:06.539 8.074 - 8.123: 92.9682% ( 88) 00:07:06.539 8.123 - 8.172: 93.6428% ( 59) 
00:07:06.539 8.172 - 8.222: 94.4432% ( 70) 00:07:06.539 8.222 - 8.271: 94.9234% ( 42) 00:07:06.539 8.271 - 8.320: 95.5180% ( 52) 00:07:06.539 8.320 - 8.369: 95.9753% ( 40) 00:07:06.539 8.369 - 8.418: 96.1582% ( 16) 00:07:06.539 8.418 - 8.468: 96.4212% ( 23) 00:07:06.539 8.468 - 8.517: 96.6385% ( 19) 00:07:06.539 8.517 - 8.566: 96.8900% ( 22) 00:07:06.539 8.566 - 8.615: 97.1530% ( 23) 00:07:06.539 8.615 - 8.665: 97.4160% ( 23) 00:07:06.539 8.665 - 8.714: 97.5074% ( 8) 00:07:06.539 8.714 - 8.763: 97.5646% ( 5) 00:07:06.539 8.763 - 8.812: 97.6103% ( 4) 00:07:06.539 8.812 - 8.862: 97.6904% ( 7) 00:07:06.539 8.862 - 8.911: 97.7475% ( 5) 00:07:06.539 8.911 - 8.960: 97.8047% ( 5) 00:07:06.539 8.960 - 9.009: 97.8504% ( 4) 00:07:06.539 9.009 - 9.058: 97.8733% ( 2) 00:07:06.539 9.058 - 9.108: 97.8847% ( 1) 00:07:06.539 9.108 - 9.157: 97.8962% ( 1) 00:07:06.539 9.157 - 9.206: 97.9076% ( 1) 00:07:06.539 9.354 - 9.403: 97.9190% ( 1) 00:07:06.539 9.452 - 9.502: 97.9419% ( 2) 00:07:06.539 9.551 - 9.600: 97.9534% ( 1) 00:07:06.539 9.600 - 9.649: 97.9648% ( 1) 00:07:06.539 9.797 - 9.846: 97.9762% ( 1) 00:07:06.539 9.895 - 9.945: 97.9877% ( 1) 00:07:06.539 9.945 - 9.994: 98.0105% ( 2) 00:07:06.539 10.338 - 10.388: 98.0220% ( 1) 00:07:06.539 10.437 - 10.486: 98.0448% ( 2) 00:07:06.539 10.486 - 10.535: 98.0563% ( 1) 00:07:06.539 10.535 - 10.585: 98.1020% ( 4) 00:07:06.539 10.634 - 10.683: 98.1249% ( 2) 00:07:06.539 10.831 - 10.880: 98.1363% ( 1) 00:07:06.539 10.880 - 10.929: 98.1477% ( 1) 00:07:06.539 10.978 - 11.028: 98.1592% ( 1) 00:07:06.539 11.077 - 11.126: 98.1820% ( 2) 00:07:06.539 11.126 - 11.175: 98.1935% ( 1) 00:07:06.539 11.175 - 11.225: 98.2049% ( 1) 00:07:06.539 11.225 - 11.274: 98.2163% ( 1) 00:07:06.539 11.422 - 11.471: 98.2278% ( 1) 00:07:06.539 11.520 - 11.569: 98.2621% ( 3) 00:07:06.539 11.717 - 11.766: 98.2735% ( 1) 00:07:06.539 12.062 - 12.111: 98.2849% ( 1) 00:07:06.539 12.111 - 12.160: 98.2964% ( 1) 00:07:06.539 12.258 - 12.308: 98.3078% ( 1) 00:07:06.539 12.603 - 12.702: 98.3192% ( 1) 00:07:06.539 12.800 - 12.898: 98.3307% ( 1) 00:07:06.539 12.898 - 12.997: 98.3421% ( 1) 00:07:06.539 12.997 - 13.095: 98.3535% ( 1) 00:07:06.539 13.391 - 13.489: 98.3650% ( 1) 00:07:06.539 13.489 - 13.588: 98.3993% ( 3) 00:07:06.539 13.588 - 13.686: 98.4564% ( 5) 00:07:06.539 13.686 - 13.785: 98.5136% ( 5) 00:07:06.539 13.785 - 13.883: 98.5479% ( 3) 00:07:06.539 13.883 - 13.982: 98.5822% ( 3) 00:07:06.539 13.982 - 14.080: 98.6508% ( 6) 00:07:06.539 14.080 - 14.178: 98.7423% ( 8) 00:07:06.539 14.178 - 14.277: 98.7537% ( 1) 00:07:06.539 14.277 - 14.375: 98.8452% ( 8) 00:07:06.539 14.375 - 14.474: 98.9481% ( 9) 00:07:06.539 14.474 - 14.572: 99.0510% ( 9) 00:07:06.539 14.572 - 14.671: 99.1539% ( 9) 00:07:06.539 14.671 - 14.769: 99.2454% ( 8) 00:07:06.539 14.769 - 14.868: 99.3254% ( 7) 00:07:06.539 14.868 - 14.966: 99.4169% ( 8) 00:07:06.539 14.966 - 15.065: 99.4397% ( 2) 00:07:06.539 15.065 - 15.163: 99.4740% ( 3) 00:07:06.539 15.163 - 15.262: 99.5426% ( 6) 00:07:06.539 15.262 - 15.360: 99.5769% ( 3) 00:07:06.539 15.360 - 15.458: 99.5998% ( 2) 00:07:06.539 15.458 - 15.557: 99.6227% ( 2) 00:07:06.539 15.557 - 15.655: 99.6456% ( 2) 00:07:06.539 15.754 - 15.852: 99.6684% ( 2) 00:07:06.539 15.852 - 15.951: 99.6799% ( 1) 00:07:06.539 16.148 - 16.246: 99.7027% ( 2) 00:07:06.539 17.526 - 17.625: 99.7142% ( 1) 00:07:06.539 17.625 - 17.723: 99.7256% ( 1) 00:07:06.539 17.822 - 17.920: 99.7370% ( 1) 00:07:06.539 17.920 - 18.018: 99.7485% ( 1) 00:07:06.539 18.117 - 18.215: 99.7713% ( 2) 00:07:06.539 18.412 - 18.511: 
99.7828% ( 1) 00:07:06.539 18.905 - 19.003: 99.7942% ( 1) 00:07:06.539 19.102 - 19.200: 99.8056% ( 1) 00:07:06.539 19.200 - 19.298: 99.8171% ( 1) 00:07:06.539 19.298 - 19.397: 99.8285% ( 1) 00:07:06.539 19.495 - 19.594: 99.8399% ( 1) 00:07:06.539 19.594 - 19.692: 99.8514% ( 1) 00:07:06.539 19.692 - 19.791: 99.8628% ( 1) 00:07:06.539 20.874 - 20.972: 99.8742% ( 1) 00:07:06.539 21.366 - 21.465: 99.8857% ( 1) 00:07:06.539 21.563 - 21.662: 99.8971% ( 1) 00:07:06.539 22.055 - 22.154: 99.9085% ( 1) 00:07:06.539 22.449 - 22.548: 99.9200% ( 1) 00:07:06.539 25.994 - 26.191: 99.9314% ( 1) 00:07:06.539 32.492 - 32.689: 99.9428% ( 1) 00:07:06.539 33.674 - 33.871: 99.9543% ( 1) 00:07:06.539 39.188 - 39.385: 99.9657% ( 1) 00:07:06.539 90.191 - 90.585: 99.9771% ( 1) 00:07:06.539 105.551 - 106.338: 99.9886% ( 1) 00:07:06.539 124.455 - 125.243: 100.0000% ( 1) 00:07:06.539 00:07:06.539 00:07:06.539 real 0m1.241s 00:07:06.539 user 0m1.076s 00:07:06.539 sys 0m0.108s 00:07:06.539 23:44:38 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.539 ************************************ 00:07:06.539 23:44:38 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:06.539 END TEST nvme_overhead 00:07:06.539 ************************************ 00:07:06.539 23:44:38 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:06.540 23:44:38 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:06.540 23:44:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.540 23:44:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.540 ************************************ 00:07:06.540 START TEST nvme_arbitration 00:07:06.540 ************************************ 00:07:06.540 23:44:38 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:09.845 Initializing NVMe Controllers 00:07:09.845 Attached to 0000:00:13.0 00:07:09.845 Attached to 0000:00:10.0 00:07:09.845 Attached to 0000:00:11.0 00:07:09.845 Attached to 0000:00:12.0 00:07:09.845 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:09.845 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:09.845 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:09.845 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:09.845 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:09.845 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:09.845 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:09.845 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:09.845 Initialization complete. Launching workers. 
00:07:09.845 Starting thread on core 1 with urgent priority queue 00:07:09.845 Starting thread on core 2 with urgent priority queue 00:07:09.845 Starting thread on core 3 with urgent priority queue 00:07:09.845 Starting thread on core 0 with urgent priority queue 00:07:09.845 QEMU NVMe Ctrl (12343 ) core 0: 832.00 IO/s 120.19 secs/100000 ios 00:07:09.845 QEMU NVMe Ctrl (12342 ) core 0: 832.00 IO/s 120.19 secs/100000 ios 00:07:09.845 QEMU NVMe Ctrl (12340 ) core 1: 810.67 IO/s 123.36 secs/100000 ios 00:07:09.845 QEMU NVMe Ctrl (12342 ) core 1: 810.67 IO/s 123.36 secs/100000 ios 00:07:09.845 QEMU NVMe Ctrl (12341 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:07:09.845 QEMU NVMe Ctrl (12342 ) core 3: 810.67 IO/s 123.36 secs/100000 ios 00:07:09.845 ======================================================== 00:07:09.845 00:07:09.845 00:07:09.845 real 0m3.327s 00:07:09.845 user 0m9.226s 00:07:09.845 sys 0m0.129s 00:07:09.845 23:44:42 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.845 23:44:42 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:09.845 ************************************ 00:07:09.845 END TEST nvme_arbitration 00:07:09.845 ************************************ 00:07:09.845 23:44:42 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:09.845 23:44:42 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.845 23:44:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.845 23:44:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.845 ************************************ 00:07:09.845 START TEST nvme_single_aen 00:07:09.845 ************************************ 00:07:09.845 23:44:42 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:09.845 Asynchronous Event Request test 00:07:09.845 Attached to 0000:00:13.0 00:07:09.845 Attached to 0000:00:10.0 00:07:09.845 Attached to 0000:00:11.0 00:07:09.845 Attached to 0000:00:12.0 00:07:09.846 Reset controller to setup AER completions for this process 00:07:09.846 Registering asynchronous event callbacks... 
00:07:09.846 Getting orig temperature thresholds of all controllers 00:07:09.846 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:09.846 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:09.846 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:09.846 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:09.846 Setting all controllers temperature threshold low to trigger AER 00:07:09.846 Waiting for all controllers temperature threshold to be set lower 00:07:09.846 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:09.846 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:09.846 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:09.846 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:09.846 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:09.846 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:09.846 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:09.846 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:09.846 Waiting for all controllers to trigger AER and reset threshold 00:07:09.846 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:09.846 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:09.846 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:09.846 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:09.846 Cleaning up... 00:07:09.846 ************************************ 00:07:09.846 END TEST nvme_single_aen 00:07:09.846 ************************************ 00:07:09.846 00:07:09.846 real 0m0.229s 00:07:09.846 user 0m0.077s 00:07:09.846 sys 0m0.102s 00:07:09.846 23:44:42 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.846 23:44:42 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:10.104 23:44:42 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:10.104 23:44:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.104 23:44:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.104 23:44:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.104 ************************************ 00:07:10.104 START TEST nvme_doorbell_aers 00:07:10.104 ************************************ 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:10.104 23:44:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:10.363 [2024-12-05 23:44:42.870583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:20.329 Executing: test_write_invalid_db 00:07:20.329 Waiting for AER completion... 00:07:20.329 Failure: test_write_invalid_db 00:07:20.329 00:07:20.329 Executing: test_invalid_db_write_overflow_sq 00:07:20.329 Waiting for AER completion... 00:07:20.329 Failure: test_invalid_db_write_overflow_sq 00:07:20.329 00:07:20.329 Executing: test_invalid_db_write_overflow_cq 00:07:20.329 Waiting for AER completion... 00:07:20.329 Failure: test_invalid_db_write_overflow_cq 00:07:20.329 00:07:20.329 23:44:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:20.329 23:44:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:20.329 [2024-12-05 23:44:52.888027] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:30.293 Executing: test_write_invalid_db 00:07:30.293 Waiting for AER completion... 00:07:30.293 Failure: test_write_invalid_db 00:07:30.293 00:07:30.293 Executing: test_invalid_db_write_overflow_sq 00:07:30.293 Waiting for AER completion... 00:07:30.293 Failure: test_invalid_db_write_overflow_sq 00:07:30.293 00:07:30.293 Executing: test_invalid_db_write_overflow_cq 00:07:30.293 Waiting for AER completion... 00:07:30.293 Failure: test_invalid_db_write_overflow_cq 00:07:30.293 00:07:30.293 23:45:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:30.293 23:45:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:30.293 [2024-12-05 23:45:02.928230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:40.243 Executing: test_write_invalid_db 00:07:40.243 Waiting for AER completion... 00:07:40.243 Failure: test_write_invalid_db 00:07:40.243 00:07:40.243 Executing: test_invalid_db_write_overflow_sq 00:07:40.243 Waiting for AER completion... 00:07:40.243 Failure: test_invalid_db_write_overflow_sq 00:07:40.243 00:07:40.243 Executing: test_invalid_db_write_overflow_cq 00:07:40.243 Waiting for AER completion... 
00:07:40.243 Failure: test_invalid_db_write_overflow_cq 00:07:40.243 00:07:40.243 23:45:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:40.243 23:45:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:40.500 [2024-12-05 23:45:12.965604] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.456 Executing: test_write_invalid_db 00:07:50.456 Waiting for AER completion... 00:07:50.456 Failure: test_write_invalid_db 00:07:50.456 00:07:50.456 Executing: test_invalid_db_write_overflow_sq 00:07:50.456 Waiting for AER completion... 00:07:50.456 Failure: test_invalid_db_write_overflow_sq 00:07:50.456 00:07:50.456 Executing: test_invalid_db_write_overflow_cq 00:07:50.456 Waiting for AER completion... 00:07:50.456 Failure: test_invalid_db_write_overflow_cq 00:07:50.456 00:07:50.456 00:07:50.456 real 0m40.188s 00:07:50.456 user 0m34.074s 00:07:50.456 sys 0m5.709s 00:07:50.456 23:45:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.456 23:45:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:50.456 ************************************ 00:07:50.456 END TEST nvme_doorbell_aers 00:07:50.456 ************************************ 00:07:50.456 23:45:22 nvme -- nvme/nvme.sh@97 -- # uname 00:07:50.456 23:45:22 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:50.456 23:45:22 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:50.456 23:45:22 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:50.456 23:45:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.456 23:45:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.456 ************************************ 00:07:50.456 START TEST nvme_multi_aen 00:07:50.456 ************************************ 00:07:50.456 23:45:22 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:50.456 [2024-12-05 23:45:23.007002] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.007068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.007081] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.009831] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.010257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.010449] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.012533] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. 
Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.012667] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.012752] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.013885] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.014031] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 [2024-12-05 23:45:23.014110] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63225) is not found. Dropping the request. 00:07:50.457 Child process pid: 63751 00:07:50.713 [Child] Asynchronous Event Request test 00:07:50.713 [Child] Attached to 0000:00:13.0 00:07:50.713 [Child] Attached to 0000:00:10.0 00:07:50.713 [Child] Attached to 0000:00:11.0 00:07:50.713 [Child] Attached to 0000:00:12.0 00:07:50.713 [Child] Registering asynchronous event callbacks... 00:07:50.713 [Child] Getting orig temperature thresholds of all controllers 00:07:50.713 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:50.713 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.713 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.713 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.713 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.713 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.713 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.713 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.713 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.713 [Child] Cleaning up... 00:07:50.713 Asynchronous Event Request test 00:07:50.713 Attached to 0000:00:13.0 00:07:50.713 Attached to 0000:00:10.0 00:07:50.713 Attached to 0000:00:11.0 00:07:50.713 Attached to 0000:00:12.0 00:07:50.713 Reset controller to setup AER completions for this process 00:07:50.713 Registering asynchronous event callbacks... 
00:07:50.713 Getting orig temperature thresholds of all controllers 00:07:50.713 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:50.713 Setting all controllers temperature threshold low to trigger AER 00:07:50.713 Waiting for all controllers temperature threshold to be set lower 00:07:50.713 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.713 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:50.713 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.713 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:50.714 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.714 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:50.714 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:50.714 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:50.714 Waiting for all controllers to trigger AER and reset threshold 00:07:50.714 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.714 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.714 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.714 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.714 Cleaning up... 00:07:50.714 00:07:50.714 real 0m0.448s 00:07:50.714 user 0m0.143s 00:07:50.714 sys 0m0.196s 00:07:50.714 23:45:23 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.714 23:45:23 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:50.714 ************************************ 00:07:50.714 END TEST nvme_multi_aen 00:07:50.714 ************************************ 00:07:50.714 23:45:23 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:50.714 23:45:23 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:50.714 23:45:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.714 23:45:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.714 ************************************ 00:07:50.714 START TEST nvme_startup 00:07:50.714 ************************************ 00:07:50.714 23:45:23 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:50.971 Initializing NVMe Controllers 00:07:50.971 Attached to 0000:00:13.0 00:07:50.971 Attached to 0000:00:10.0 00:07:50.971 Attached to 0000:00:11.0 00:07:50.971 Attached to 0000:00:12.0 00:07:50.971 Initialization complete. 00:07:50.971 Time used:167543.969 (us). 
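The multi-controller AER pass above works the same way on every controller: the test drops the composite temperature threshold below the drive's reported temperature (323 Kelvin here), waits for the temperature-over-threshold asynchronous event on log page 2, then puts the 343 Kelvin default back in aer_cb. Outside of SPDK the same trigger can be sketched with nvme-cli; the device path and the threshold values below are assumptions for illustration, not taken from this run.

    dev=/dev/nvme0
    # Read the current temperature threshold (Get/Set Features, Feature ID 0x04, value in Kelvin).
    nvme get-feature "$dev" -f 0x04
    # Drop the threshold below the current composite temperature so the controller
    # raises a temperature asynchronous event.
    nvme set-feature "$dev" -f 0x04 -v 300
    sleep 1
    # Restore the default threshold once the event has been observed.
    nvme set-feature "$dev" -f 0x04 -v 343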
00:07:50.971 00:07:50.971 real 0m0.232s 00:07:50.971 user 0m0.073s 00:07:50.971 sys 0m0.100s 00:07:50.971 23:45:23 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.971 ************************************ 00:07:50.971 END TEST nvme_startup 00:07:50.971 ************************************ 00:07:50.971 23:45:23 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:50.971 23:45:23 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:50.971 23:45:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.971 23:45:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.971 23:45:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.971 ************************************ 00:07:50.971 START TEST nvme_multi_secondary 00:07:50.971 ************************************ 00:07:50.971 23:45:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:07:50.971 23:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63807 00:07:50.971 23:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:50.971 23:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63808 00:07:50.971 23:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:50.971 23:45:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:54.285 Initializing NVMe Controllers 00:07:54.285 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.285 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.285 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.285 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.285 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:54.285 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:54.285 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:54.285 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:54.285 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:54.285 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:54.285 Initialization complete. Launching workers. 
00:07:54.285 ======================================================== 00:07:54.285 Latency(us) 00:07:54.285 Device Information : IOPS MiB/s Average min max 00:07:54.285 PCIE (0000:00:13.0) NSID 1 from core 1: 7311.78 28.56 2187.82 743.65 7633.02 00:07:54.285 PCIE (0000:00:10.0) NSID 1 from core 1: 7308.12 28.55 2187.95 780.07 7010.34 00:07:54.285 PCIE (0000:00:11.0) NSID 1 from core 1: 7301.78 28.52 2190.81 713.50 6088.01 00:07:54.285 PCIE (0000:00:12.0) NSID 1 from core 1: 7332.11 28.64 2181.82 635.08 5910.76 00:07:54.285 PCIE (0000:00:12.0) NSID 2 from core 1: 7316.78 28.58 2186.35 637.07 5306.63 00:07:54.285 PCIE (0000:00:12.0) NSID 3 from core 1: 7312.78 28.57 2187.56 700.73 5364.28 00:07:54.285 ======================================================== 00:07:54.285 Total : 43883.35 171.42 2187.05 635.08 7633.02 00:07:54.285 00:07:54.285 Initializing NVMe Controllers 00:07:54.285 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.285 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.285 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.285 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.285 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:54.285 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:54.285 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:54.285 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:54.285 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:54.285 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:54.285 Initialization complete. Launching workers. 00:07:54.285 ======================================================== 00:07:54.286 Latency(us) 00:07:54.286 Device Information : IOPS MiB/s Average min max 00:07:54.286 PCIE (0000:00:13.0) NSID 1 from core 2: 2982.97 11.65 5363.36 986.43 18466.01 00:07:54.286 PCIE (0000:00:10.0) NSID 1 from core 2: 2982.97 11.65 5362.62 1111.79 14825.33 00:07:54.286 PCIE (0000:00:11.0) NSID 1 from core 2: 2982.97 11.65 5362.95 1124.73 14853.24 00:07:54.286 PCIE (0000:00:12.0) NSID 1 from core 2: 2982.97 11.65 5362.48 1100.12 14497.17 00:07:54.286 PCIE (0000:00:12.0) NSID 2 from core 2: 2982.97 11.65 5363.57 1112.26 14554.02 00:07:54.286 PCIE (0000:00:12.0) NSID 3 from core 2: 2982.97 11.65 5363.54 1070.74 18374.98 00:07:54.286 ======================================================== 00:07:54.286 Total : 17897.82 69.91 5363.09 986.43 18466.01 00:07:54.286 00:07:54.286 23:45:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63807 00:07:56.806 Initializing NVMe Controllers 00:07:56.806 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.806 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.806 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.806 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.806 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:56.806 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:56.806 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:56.806 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:56.806 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:56.806 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:56.806 Initialization complete. Launching workers. 
00:07:56.806 ======================================================== 00:07:56.806 Latency(us) 00:07:56.806 Device Information : IOPS MiB/s Average min max 00:07:56.806 PCIE (0000:00:13.0) NSID 1 from core 0: 10469.32 40.90 1527.89 714.95 6378.65 00:07:56.806 PCIE (0000:00:10.0) NSID 1 from core 0: 10469.12 40.89 1526.98 701.74 6323.71 00:07:56.806 PCIE (0000:00:11.0) NSID 1 from core 0: 10469.12 40.89 1527.86 719.82 6313.97 00:07:56.806 PCIE (0000:00:12.0) NSID 1 from core 0: 10469.32 40.90 1527.80 707.17 7044.00 00:07:56.806 PCIE (0000:00:12.0) NSID 2 from core 0: 10469.32 40.90 1527.78 662.30 6811.87 00:07:56.806 PCIE (0000:00:12.0) NSID 3 from core 0: 10469.32 40.90 1527.76 615.29 6469.46 00:07:56.806 ======================================================== 00:07:56.806 Total : 62815.49 245.37 1527.68 615.29 7044.00 00:07:56.806 00:07:56.806 23:45:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63808 00:07:56.806 23:45:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63877 00:07:56.806 23:45:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:07:56.806 23:45:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63878 00:07:56.806 23:45:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:07:56.806 23:45:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:00.081 Initializing NVMe Controllers 00:08:00.081 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.081 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.081 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.081 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.081 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.081 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.081 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.081 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.081 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.081 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.081 Initialization complete. Launching workers. 
00:08:00.081 ======================================================== 00:08:00.081 Latency(us) 00:08:00.081 Device Information : IOPS MiB/s Average min max 00:08:00.081 PCIE (0000:00:13.0) NSID 1 from core 0: 7335.11 28.65 2180.79 814.67 5486.04 00:08:00.081 PCIE (0000:00:10.0) NSID 1 from core 0: 7335.44 28.65 2179.53 781.17 6286.30 00:08:00.081 PCIE (0000:00:11.0) NSID 1 from core 0: 7335.44 28.65 2180.40 802.59 6169.28 00:08:00.081 PCIE (0000:00:12.0) NSID 1 from core 0: 7335.44 28.65 2180.21 812.43 6138.46 00:08:00.081 PCIE (0000:00:12.0) NSID 2 from core 0: 7335.44 28.65 2180.05 808.65 5076.72 00:08:00.081 PCIE (0000:00:12.0) NSID 3 from core 0: 7335.44 28.65 2179.85 807.96 5448.40 00:08:00.081 ======================================================== 00:08:00.081 Total : 44012.32 171.92 2180.14 781.17 6286.30 00:08:00.081 00:08:00.081 Initializing NVMe Controllers 00:08:00.081 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.081 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.081 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.081 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.081 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:00.081 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:00.081 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:00.081 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:00.081 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:00.081 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:00.081 Initialization complete. Launching workers. 00:08:00.081 ======================================================== 00:08:00.081 Latency(us) 00:08:00.081 Device Information : IOPS MiB/s Average min max 00:08:00.081 PCIE (0000:00:13.0) NSID 1 from core 1: 7514.39 29.35 2128.75 721.70 5332.50 00:08:00.081 PCIE (0000:00:10.0) NSID 1 from core 1: 7514.39 29.35 2127.63 693.26 5535.08 00:08:00.081 PCIE (0000:00:11.0) NSID 1 from core 1: 7514.39 29.35 2128.51 727.58 5614.41 00:08:00.081 PCIE (0000:00:12.0) NSID 1 from core 1: 7514.39 29.35 2128.36 715.23 5433.00 00:08:00.081 PCIE (0000:00:12.0) NSID 2 from core 1: 7514.39 29.35 2128.23 659.92 4986.12 00:08:00.081 PCIE (0000:00:12.0) NSID 3 from core 1: 7514.39 29.35 2128.10 608.02 5053.48 00:08:00.081 ======================================================== 00:08:00.081 Total : 45086.33 176.12 2128.26 608.02 5614.41 00:08:00.081 00:08:01.975 Initializing NVMe Controllers 00:08:01.975 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.975 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:01.975 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:01.975 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:01.975 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:01.975 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:01.975 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:01.975 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:01.975 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:01.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:01.975 Initialization complete. Launching workers. 
00:08:01.975 ======================================================== 00:08:01.975 Latency(us) 00:08:01.975 Device Information : IOPS MiB/s Average min max 00:08:01.975 PCIE (0000:00:13.0) NSID 1 from core 2: 4342.09 16.96 3683.92 737.42 27106.38 00:08:01.975 PCIE (0000:00:10.0) NSID 1 from core 2: 4342.09 16.96 3686.02 748.73 27053.44 00:08:01.975 PCIE (0000:00:11.0) NSID 1 from core 2: 4342.09 16.96 3687.46 745.29 17924.45 00:08:01.975 PCIE (0000:00:12.0) NSID 1 from core 2: 4342.09 16.96 3687.39 744.60 27040.58 00:08:01.975 PCIE (0000:00:12.0) NSID 2 from core 2: 4342.09 16.96 3687.15 727.58 27447.16 00:08:01.975 PCIE (0000:00:12.0) NSID 3 from core 2: 4342.09 16.96 3687.09 711.19 27188.78 00:08:01.975 ======================================================== 00:08:01.975 Total : 26052.52 101.77 3686.51 711.19 27447.16 00:08:01.975 00:08:01.975 ************************************ 00:08:01.975 END TEST nvme_multi_secondary 00:08:01.975 ************************************ 00:08:01.975 23:45:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63877 00:08:01.975 23:45:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63878 00:08:01.975 00:08:01.975 real 0m10.699s 00:08:01.975 user 0m18.321s 00:08:01.975 sys 0m0.597s 00:08:01.975 23:45:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.975 23:45:34 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:01.975 23:45:34 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:01.975 23:45:34 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:01.975 23:45:34 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62822 ]] 00:08:01.975 23:45:34 nvme -- common/autotest_common.sh@1094 -- # kill 62822 00:08:01.975 23:45:34 nvme -- common/autotest_common.sh@1095 -- # wait 62822 00:08:01.975 [2024-12-05 23:45:34.319998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.320125] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.320150] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.320165] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.322230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.322278] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.322291] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.322306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.975 [2024-12-05 23:45:34.324295] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 
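Both nvme_multi_secondary passes above follow the pattern visible in the traced commands: one spdk_nvme_perf instance runs as the primary process and two more attach to it as secondaries, all sharing shared-memory id 0 (-i 0) but pinned to different cores (-c 0x1, -c 0x2, -c 0x4), and the script then waits on each pid. A minimal sketch of that orchestration, assuming the same perf binary path; the sleep is an illustrative way to let the primary finish initializing before the secondaries attach.

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Primary process: owns shm id 0 and runs the longest workload on core 0.
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    pid0=$!
    sleep 2
    # Secondary processes attach to the same -i 0 instance on other cores.
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    pid2=$!
    wait "$pid0" "$pid1" "$pid2"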
00:08:01.975 [2024-12-05 23:45:34.324480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.324499] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.324514] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.326453] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.326503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.326518] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.326534] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63750) is not found. Dropping the request. 00:08:01.976 [2024-12-05 23:45:34.436930] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:01.976 23:45:34 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:01.976 23:45:34 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:01.976 23:45:34 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:01.976 23:45:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.976 23:45:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.976 23:45:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.976 ************************************ 00:08:01.976 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:01.976 ************************************ 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:01.976 * Looking for test storage... 
00:08:01.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.976 --rc genhtml_branch_coverage=1 00:08:01.976 --rc genhtml_function_coverage=1 00:08:01.976 --rc genhtml_legend=1 00:08:01.976 --rc geninfo_all_blocks=1 00:08:01.976 --rc geninfo_unexecuted_blocks=1 00:08:01.976 00:08:01.976 ' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.976 --rc genhtml_branch_coverage=1 00:08:01.976 --rc genhtml_function_coverage=1 00:08:01.976 --rc genhtml_legend=1 00:08:01.976 --rc geninfo_all_blocks=1 00:08:01.976 --rc geninfo_unexecuted_blocks=1 00:08:01.976 00:08:01.976 ' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.976 --rc genhtml_branch_coverage=1 00:08:01.976 --rc genhtml_function_coverage=1 00:08:01.976 --rc genhtml_legend=1 00:08:01.976 --rc geninfo_all_blocks=1 00:08:01.976 --rc geninfo_unexecuted_blocks=1 00:08:01.976 00:08:01.976 ' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.976 --rc genhtml_branch_coverage=1 00:08:01.976 --rc genhtml_function_coverage=1 00:08:01.976 --rc genhtml_legend=1 00:08:01.976 --rc geninfo_all_blocks=1 00:08:01.976 --rc geninfo_unexecuted_blocks=1 00:08:01.976 00:08:01.976 ' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:01.976 
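The scripts/common.sh xtrace above is the version gate that decides whether the installed lcov is older than 2.0 before choosing which --rc coverage options to export: both version strings are split on '.', '-' and ':' and compared field by field as decimals. Reconstructed from those traced steps, the comparison looks roughly like the helper below; it is a sketch inferred from the trace, not a verbatim copy of scripts/common.sh.

    # Succeeds when version $1 is strictly lower than version $2.
    version_lt() {
        local -a ver1 ver2
        local v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0}; d2=${ver2[v]:-0}
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0   # treat non-numeric fields as 0 in this sketch
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            ((d1 > d2)) && return 1
            ((d1 < d2)) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"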
23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:01.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64040 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64040 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64040 ']' 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
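The get_first_nvme_bdf / get_nvme_bdfs helpers traced above decide which PCIe device the stuck-admin-command test binds to: scripts/gen_nvme.sh emits a JSON config entry for every NVMe device it finds, jq pulls the traddr out of each entry, and the first address wins (0000:00:10.0 on this host). A compact sketch of that lookup, assuming the same repository layout; the error message is illustrative.

    rootdir=/home/vagrant/spdk_repo/spdk
    get_first_nvme_bdf() {
        local -a bdfs
        # gen_nvme.sh prints one config entry per NVMe device;
        # params.traddr carries the PCIe address, e.g. 0000:00:10.0.
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && { echo "no NVMe devices found" >&2; return 1; }
        echo "${bdfs[0]}"
    }

    bdf=$(get_first_nvme_bdf)   # resolves to 0000:00:10.0 in this run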
00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.976 23:45:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:02.234 [2024-12-05 23:45:34.722025] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:08:02.234 [2024-12-05 23:45:34.722315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64040 ] 00:08:02.234 [2024-12-05 23:45:34.891609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.494 [2024-12-05 23:45:34.996441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.494 [2024-12-05 23:45:34.996844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.494 [2024-12-05 23:45:34.996867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.494 [2024-12-05 23:45:34.996681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:03.063 nvme0n1 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_zGn5f.txt 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:03.063 true 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733442335 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64063 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:03.063 23:45:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:05.585 [2024-12-05 23:45:37.698570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:05.585 [2024-12-05 23:45:37.698827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:05.585 [2024-12-05 23:45:37.698851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:05.585 [2024-12-05 23:45:37.698864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.585 [2024-12-05 23:45:37.700566] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:05.585 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64063 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64063 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64063 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_zGn5f.txt 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_zGn5f.txt 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64040 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64040 ']' 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64040 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64040 00:08:05.585 killing process with pid 64040 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.585 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.586 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64040' 00:08:05.586 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64040 00:08:05.586 23:45:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64040 00:08:06.964 23:45:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:06.964 23:45:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:06.964 00:08:06.964 real 0m4.889s 00:08:06.964 user 0m17.383s 00:08:06.964 sys 0m0.514s 00:08:06.964 ************************************ 00:08:06.964 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:08:06.964 ************************************ 00:08:06.964 23:45:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.964 23:45:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:06.964 23:45:39 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:06.964 23:45:39 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:06.964 23:45:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.964 23:45:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.964 23:45:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.964 ************************************ 00:08:06.964 START TEST nvme_fio 00:08:06.964 ************************************ 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:06.964 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:06.964 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:07.222 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:07.222 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:07.481 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:07.481 23:45:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:07.481 23:45:39 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:07.481 23:45:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:07.481 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:07.481 fio-3.35 00:08:07.481 Starting 1 thread 00:08:15.637 00:08:15.637 test: (groupid=0, jobs=1): err= 0: pid=64202: Thu Dec 5 23:45:47 2024 00:08:15.637 read: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(149MiB/2001msec) 00:08:15.637 slat (usec): min=3, max=280, avg= 5.67, stdev= 3.65 00:08:15.637 clat (usec): min=251, max=41753, avg=3328.57, stdev=1154.93 00:08:15.637 lat (usec): min=256, max=41758, avg=3334.25, stdev=1156.35 00:08:15.637 clat percentiles (usec): 00:08:15.638 | 1.00th=[ 2073], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:08:15.638 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3097], 00:08:15.638 | 70.00th=[ 3490], 80.00th=[ 4178], 90.00th=[ 5014], 95.00th=[ 5669], 00:08:15.638 | 99.00th=[ 6980], 99.50th=[ 7635], 99.90th=[ 8979], 99.95th=[ 9372], 00:08:15.638 | 99.99th=[11994] 00:08:15.638 bw ( KiB/s): min=64440, max=79248, per=96.38%, avg=73637.33, stdev=8029.16, samples=3 00:08:15.638 iops : min=16110, max=19812, avg=18409.33, stdev=2007.29, samples=3 00:08:15.638 write: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(149MiB/2001msec); 0 zone resets 00:08:15.638 slat (usec): min=3, max=325, avg= 5.87, stdev= 3.49 00:08:15.638 clat (usec): min=300, max=11935, avg=3349.98, stdev=1134.75 00:08:15.638 lat (usec): min=305, max=11949, avg=3355.86, stdev=1136.21 00:08:15.638 clat percentiles (usec): 00:08:15.638 | 1.00th=[ 2089], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:08:15.638 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3130], 00:08:15.638 | 70.00th=[ 3523], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5735], 00:08:15.638 | 99.00th=[ 7046], 99.50th=[ 7767], 99.90th=[ 8717], 99.95th=[ 9241], 00:08:15.638 | 99.99th=[11600] 00:08:15.638 bw ( KiB/s): min=64904, max=78768, per=96.33%, avg=73544.00, stdev=7536.87, samples=3 00:08:15.638 iops : min=16226, max=19692, avg=18386.00, stdev=1884.22, samples=3 00:08:15.638 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:08:15.638 lat (msec) : 2=0.44%, 4=77.19%, 10=22.29%, 20=0.03%, 50=0.01% 00:08:15.638 cpu : usr=98.35%, sys=0.25%, ctx=26, majf=0, 
minf=608 00:08:15.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:15.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:15.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:15.638 issued rwts: total=38222,38192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:15.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:15.638 00:08:15.638 Run status group 0 (all jobs): 00:08:15.638 READ: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=149MiB (157MB), run=2001-2001msec 00:08:15.638 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=149MiB (156MB), run=2001-2001msec 00:08:15.638 ----------------------------------------------------- 00:08:15.638 Suppressions used: 00:08:15.638 count bytes template 00:08:15.638 1 32 /usr/src/fio/parse.c 00:08:15.638 1 8 libtcmalloc_minimal.so 00:08:15.638 ----------------------------------------------------- 00:08:15.638 00:08:15.638 23:45:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:15.638 23:45:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:15.638 23:45:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:15.638 23:45:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:15.638 23:45:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:15.638 23:45:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:15.638 23:45:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:15.638 23:45:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:15.638 23:45:48 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:15.638 23:45:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:15.638 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:15.638 fio-3.35 00:08:15.638 Starting 1 thread 00:08:33.762 00:08:33.762 test: (groupid=0, jobs=1): err= 0: pid=64263: Thu Dec 5 23:46:04 2024 00:08:33.762 read: IOPS=16.8k, BW=65.6MiB/s (68.8MB/s)(131MiB/2001msec) 00:08:33.762 slat (nsec): min=4513, max=91196, avg=6256.87, stdev=3104.45 00:08:33.762 clat (usec): min=526, max=12050, avg=3773.90, stdev=1146.15 00:08:33.762 lat (usec): min=532, max=12092, avg=3780.15, stdev=1147.25 00:08:33.762 clat percentiles (usec): 00:08:33.762 | 1.00th=[ 2278], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 2999], 00:08:33.762 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3392], 60.00th=[ 3556], 00:08:33.762 | 70.00th=[ 3818], 80.00th=[ 4424], 90.00th=[ 5473], 95.00th=[ 6194], 00:08:33.762 | 99.00th=[ 7767], 99.50th=[ 8586], 99.90th=[10290], 99.95th=[10945], 00:08:33.762 | 99.99th=[11994] 00:08:33.762 bw ( KiB/s): min=64064, max=67704, per=97.89%, avg=65752.00, stdev=1834.30, samples=3 00:08:33.762 iops : min=16016, max=16926, avg=16438.00, stdev=458.58, samples=3 00:08:33.762 write: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(131MiB/2001msec); 0 zone resets 00:08:33.762 slat (nsec): min=4630, max=95853, avg=6566.34, stdev=3035.53 00:08:33.762 clat (usec): min=516, max=11891, avg=3816.98, stdev=1143.06 00:08:33.762 lat (usec): min=522, max=11902, avg=3823.55, stdev=1144.14 00:08:33.762 clat percentiles (usec): 00:08:33.762 | 1.00th=[ 2343], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3064], 00:08:33.762 | 30.00th=[ 3195], 40.00th=[ 3294], 50.00th=[ 3425], 60.00th=[ 3589], 00:08:33.762 | 70.00th=[ 3851], 80.00th=[ 4424], 90.00th=[ 5473], 95.00th=[ 6259], 00:08:33.762 | 99.00th=[ 7832], 99.50th=[ 8455], 99.90th=[10290], 99.95th=[10814], 00:08:33.762 | 99.99th=[11600] 00:08:33.762 bw ( KiB/s): min=63368, max=67392, per=97.48%, avg=65573.33, stdev=2039.68, samples=3 00:08:33.762 iops : min=15842, max=16848, avg=16393.33, stdev=509.92, samples=3 00:08:33.762 lat (usec) : 750=0.03%, 1000=0.01% 00:08:33.762 lat (msec) : 2=0.48%, 4=73.22%, 10=26.12%, 20=0.14% 00:08:33.762 cpu : usr=98.50%, sys=0.10%, ctx=4, majf=0, minf=608 00:08:33.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:33.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.762 issued rwts: total=33603,33652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:33.762 00:08:33.762 Run status group 0 (all jobs): 00:08:33.762 READ: bw=65.6MiB/s (68.8MB/s), 65.6MiB/s-65.6MiB/s (68.8MB/s-68.8MB/s), io=131MiB (138MB), run=2001-2001msec 00:08:33.762 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=131MiB (138MB), run=2001-2001msec 00:08:33.762 ----------------------------------------------------- 00:08:33.762 Suppressions used: 00:08:33.762 count bytes template 00:08:33.762 1 32 /usr/src/fio/parse.c 00:08:33.762 1 8 libtcmalloc_minimal.so 00:08:33.762 ----------------------------------------------------- 00:08:33.762 00:08:33.762 23:46:04 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:33.762 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:33.763 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:33.763 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:33.763 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:33.763 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:33.763 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:33.763 23:46:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:33.763 23:46:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:33.763 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:33.763 fio-3.35 00:08:33.763 Starting 1 thread 00:08:37.963 00:08:37.963 test: (groupid=0, jobs=1): err= 0: pid=64327: Thu Dec 5 23:46:10 2024 00:08:37.963 read: IOPS=15.7k, BW=61.4MiB/s (64.3MB/s)(123MiB/2001msec) 00:08:37.963 slat (usec): min=4, max=106, avg= 6.67, stdev= 3.51 00:08:37.963 clat (usec): min=1318, max=17554, avg=4034.35, stdev=1319.14 00:08:37.963 lat (usec): min=1324, max=17559, avg=4041.02, stdev=1320.50 00:08:37.963 clat percentiles (usec): 00:08:37.963 | 1.00th=[ 2311], 5.00th=[ 2835], 10.00th=[ 2966], 20.00th=[ 3097], 00:08:37.963 | 30.00th=[ 3228], 40.00th=[ 3359], 
50.00th=[ 3490], 60.00th=[ 3752], 00:08:37.963 | 70.00th=[ 4293], 80.00th=[ 5080], 90.00th=[ 5866], 95.00th=[ 6718], 00:08:37.963 | 99.00th=[ 8291], 99.50th=[ 8979], 99.90th=[11338], 99.95th=[13304], 00:08:37.963 | 99.99th=[15270] 00:08:37.963 bw ( KiB/s): min=61808, max=65208, per=100.00%, avg=63498.67, stdev=1700.08, samples=3 00:08:37.963 iops : min=15452, max=16302, avg=15874.67, stdev=425.02, samples=3 00:08:37.963 write: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(123MiB/2001msec); 0 zone resets 00:08:37.963 slat (usec): min=4, max=412, avg= 7.00, stdev= 4.19 00:08:37.963 clat (usec): min=1278, max=15309, avg=4078.04, stdev=1326.71 00:08:37.963 lat (usec): min=1284, max=15315, avg=4085.05, stdev=1328.10 00:08:37.963 clat percentiles (usec): 00:08:37.963 | 1.00th=[ 2343], 5.00th=[ 2868], 10.00th=[ 2999], 20.00th=[ 3130], 00:08:37.963 | 30.00th=[ 3261], 40.00th=[ 3392], 50.00th=[ 3523], 60.00th=[ 3785], 00:08:37.963 | 70.00th=[ 4359], 80.00th=[ 5145], 90.00th=[ 5866], 95.00th=[ 6783], 00:08:37.963 | 99.00th=[ 8356], 99.50th=[ 8979], 99.90th=[11600], 99.95th=[13435], 00:08:37.963 | 99.99th=[14746] 00:08:37.963 bw ( KiB/s): min=61272, max=64544, per=100.00%, avg=63189.33, stdev=1707.03, samples=3 00:08:37.963 iops : min=15318, max=16136, avg=15797.33, stdev=426.76, samples=3 00:08:37.963 lat (msec) : 2=0.50%, 4=64.56%, 10=34.68%, 20=0.26% 00:08:37.963 cpu : usr=98.65%, sys=0.05%, ctx=2, majf=0, minf=608 00:08:37.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:37.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.963 issued rwts: total=31435,31465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.963 00:08:37.963 Run status group 0 (all jobs): 00:08:37.963 READ: bw=61.4MiB/s (64.3MB/s), 61.4MiB/s-61.4MiB/s (64.3MB/s-64.3MB/s), io=123MiB (129MB), run=2001-2001msec 00:08:37.963 WRITE: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=123MiB (129MB), run=2001-2001msec 00:08:37.963 ----------------------------------------------------- 00:08:37.963 Suppressions used: 00:08:37.963 count bytes template 00:08:37.963 1 32 /usr/src/fio/parse.c 00:08:37.963 1 8 libtcmalloc_minimal.so 00:08:37.963 ----------------------------------------------------- 00:08:37.963 00:08:37.963 23:46:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:37.963 23:46:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.963 23:46:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:37.963 23:46:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:38.224 23:46:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:38.224 23:46:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:38.484 23:46:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:38.484 23:46:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:38.484 23:46:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:38.761 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:38.761 fio-3.35 00:08:38.761 Starting 1 thread 00:08:42.044 00:08:42.044 test: (groupid=0, jobs=1): err= 0: pid=64389: Thu Dec 5 23:46:14 2024 00:08:42.044 read: IOPS=7180, BW=28.0MiB/s (29.4MB/s)(57.3MiB/2042msec) 00:08:42.044 slat (nsec): min=4226, max=96673, avg=5838.43, stdev=2836.06 00:08:42.044 clat (usec): min=1064, max=50185, avg=5698.42, stdev=3988.75 00:08:42.044 lat (usec): min=1069, max=50190, avg=5704.26, stdev=3988.82 00:08:42.044 clat percentiles (usec): 00:08:42.044 | 1.00th=[ 2024], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2900], 00:08:42.044 | 30.00th=[ 3195], 40.00th=[ 3490], 50.00th=[ 3851], 60.00th=[ 4883], 00:08:42.044 | 70.00th=[ 6980], 80.00th=[ 9110], 90.00th=[11207], 95.00th=[12518], 00:08:42.044 | 99.00th=[14746], 99.50th=[15926], 99.90th=[49021], 99.95th=[49546], 00:08:42.044 | 99.99th=[50070] 00:08:42.044 bw ( KiB/s): min=12736, max=72480, per=100.00%, avg=29250.00, stdev=28906.59, samples=4 00:08:42.044 iops : min= 3184, max=18120, avg=7312.50, stdev=7226.65, samples=4 00:08:42.044 write: IOPS=7147, BW=27.9MiB/s (29.3MB/s)(57.0MiB/2042msec); 0 zone resets 00:08:42.044 slat (nsec): min=4335, max=77365, avg=6059.77, stdev=2585.19 00:08:42.044 clat (usec): min=1037, max=90202, avg=12137.30, stdev=17288.27 00:08:42.044 lat (usec): min=1042, max=90206, avg=12143.36, stdev=17288.18 00:08:42.044 clat percentiles (usec): 00:08:42.044 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2966], 00:08:42.044 | 30.00th=[ 3228], 40.00th=[ 3490], 50.00th=[ 3884], 60.00th=[ 4948], 00:08:42.044 | 70.00th=[ 7832], 80.00th=[11731], 90.00th=[49546], 95.00th=[53740], 00:08:42.044 | 99.00th=[59507], 99.50th=[62653], 99.90th=[85459], 
99.95th=[87557], 00:08:42.044 | 99.99th=[89654] 00:08:42.044 bw ( KiB/s): min=12184, max=72456, per=100.00%, avg=29014.00, stdev=29024.93, samples=4 00:08:42.044 iops : min= 3046, max=18114, avg=7253.50, stdev=7256.23, samples=4 00:08:42.044 lat (msec) : 2=0.87%, 4=51.41%, 10=27.84%, 20=11.95%, 50=3.03% 00:08:42.044 lat (msec) : 100=4.89% 00:08:42.044 cpu : usr=99.17%, sys=0.00%, ctx=4, majf=0, minf=606 00:08:42.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:08:42.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.044 issued rwts: total=14663,14596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.044 00:08:42.044 Run status group 0 (all jobs): 00:08:42.044 READ: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=57.3MiB (60.1MB), run=2042-2042msec 00:08:42.044 WRITE: bw=27.9MiB/s (29.3MB/s), 27.9MiB/s-27.9MiB/s (29.3MB/s-29.3MB/s), io=57.0MiB (59.8MB), run=2042-2042msec 00:08:42.044 ----------------------------------------------------- 00:08:42.044 Suppressions used: 00:08:42.044 count bytes template 00:08:42.044 1 32 /usr/src/fio/parse.c 00:08:42.044 1 8 libtcmalloc_minimal.so 00:08:42.044 ----------------------------------------------------- 00:08:42.044 00:08:42.044 ************************************ 00:08:42.044 END TEST nvme_fio 00:08:42.044 ************************************ 00:08:42.044 23:46:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:42.044 23:46:14 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:42.044 00:08:42.044 real 0m35.045s 00:08:42.045 user 0m15.813s 00:08:42.045 sys 0m35.936s 00:08:42.045 23:46:14 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.045 23:46:14 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:42.045 ************************************ 00:08:42.045 END TEST nvme 00:08:42.045 ************************************ 00:08:42.045 00:08:42.045 real 1m46.024s 00:08:42.045 user 3m38.337s 00:08:42.045 sys 0m47.314s 00:08:42.045 23:46:14 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.045 23:46:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.045 23:46:14 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:42.045 23:46:14 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:42.045 23:46:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.045 23:46:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.045 23:46:14 -- common/autotest_common.sh@10 -- # set +x 00:08:42.045 ************************************ 00:08:42.045 START TEST nvme_scc 00:08:42.045 ************************************ 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:42.045 * Looking for test storage... 
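
The three fio passes traced above (one per PCIe controller) are all launched through the same helper path visible in common/autotest_common.sh: it ldd's the SPDK fio plugin, picks out whichever sanitizer runtime the plugin links against (the trace loops over libasan and libclang_rt.asan), and preloads that runtime together with the plugin before starting fio, presumably because the fio binary itself is not ASAN-instrumented. Note the PCIe address in --filename is written with dots (0000.00.13.0) rather than colons, which is the form the plugin expects in fio filenames. A condensed sketch of the same invocation, reusing the paths and example job file from the trace (not the full helper, which also iterates the sanitizer list):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # resolve the ASAN runtime the plugin was linked against
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer first, then the SPDK ioengine, then run the job file
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
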
00:08:42.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.045 --rc genhtml_branch_coverage=1 00:08:42.045 --rc genhtml_function_coverage=1 00:08:42.045 --rc genhtml_legend=1 00:08:42.045 --rc geninfo_all_blocks=1 00:08:42.045 --rc geninfo_unexecuted_blocks=1 00:08:42.045 00:08:42.045 ' 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.045 --rc genhtml_branch_coverage=1 00:08:42.045 --rc genhtml_function_coverage=1 00:08:42.045 --rc genhtml_legend=1 00:08:42.045 --rc geninfo_all_blocks=1 00:08:42.045 --rc geninfo_unexecuted_blocks=1 00:08:42.045 00:08:42.045 ' 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:42.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.045 --rc genhtml_branch_coverage=1 00:08:42.045 --rc genhtml_function_coverage=1 00:08:42.045 --rc genhtml_legend=1 00:08:42.045 --rc geninfo_all_blocks=1 00:08:42.045 --rc geninfo_unexecuted_blocks=1 00:08:42.045 00:08:42.045 ' 00:08:42.045 23:46:14 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.045 --rc genhtml_branch_coverage=1 00:08:42.045 --rc genhtml_function_coverage=1 00:08:42.045 --rc genhtml_legend=1 00:08:42.045 --rc geninfo_all_blocks=1 00:08:42.045 --rc geninfo_unexecuted_blocks=1 00:08:42.045 00:08:42.045 ' 00:08:42.045 23:46:14 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.045 23:46:14 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.045 23:46:14 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.045 23:46:14 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.045 23:46:14 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.045 23:46:14 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:42.045 23:46:14 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
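
The lcov handling traced just above hinges on the cmp_versions helper in scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays and compared field by field, so `lt 1.15 2` succeeds (1 < 2) and the pre-2.0 option names (--rc lcov_branch_coverage=1, --rc lcov_function_coverage=1) end up exported in LCOV_OPTS and LCOV. A simplified sketch of that comparison technique (only the '<' case, without the helper's other operators):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo "lcov is older than 2.x: keep the legacy --rc option names"
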
00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:42.045 23:46:14 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:42.045 23:46:14 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.045 23:46:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:42.045 23:46:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:42.045 23:46:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:42.045 23:46:14 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:42.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.613 Waiting for block devices as requested 00:08:42.613 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.613 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.873 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.873 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.170 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:48.170 23:46:20 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:48.170 23:46:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:48.170 23:46:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:48.170 23:46:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.170 23:46:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
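
From here the scan_nvme_ctrls walk begins: for each /sys/class/nvme/nvme* controller, nvme_get runs nvme-cli's id-ctrl against the character device and reads its "field : value" output with IFS=':' into a global associative array named after the controller (nvme0, nvme1, ...); the long runs of eval/assignment records that follow are that loop unrolled by xtrace, one record per field. The same is then repeated per namespace with id-ns (the ng0n1 array further down). An illustrative sketch of the parsing technique, not functions.sh itself:

    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -z $val ]] && continue            # skip banner lines without a value
        reg=${reg//[[:space:]]/}             # 'vid   ' -> 'vid', 'ps    0' -> 'ps0'
        nvme0[$reg]=${val# }                 # keep the raw value, e.g. 0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"
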
00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:48.170 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.171 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:48.172 23:46:20 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.172 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:48.173 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:08:48.173 
23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.173 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:08:48.174 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.174 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:48.175 23:46:20 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:48.175 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.175 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:48.176 23:46:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:48.176 23:46:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:48.176 23:46:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.176 23:46:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:48.176 23:46:20 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:48.176 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 
23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:48.177 
23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:48.177 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:48.178 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.179 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:48.179 23:46:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:08:48.179 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
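Before the id-ns fields above were captured, the namespace loop visible in the trace (`for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*`) picked up /sys/class/nvme/nvme1/ng1n1; the same extglob also matches the block node nvme1n1, which is why both get their own id-ns pass below. A hedged illustration of how that pattern expands for this controller (the echo line is illustrative only):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
# ${ctrl##*nvme} -> "1", ${ctrl##*/} -> "nvme1", so the pattern becomes @("ng1"|"nvme1n")*
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  ns_dev=${ns##*/}                     # ng1n1 on the first pass, nvme1n1 on the second
  echo "would run: nvme id-ns /dev/$ns_dev"
done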
00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:08:48.180 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:08:48.181 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 
23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.181 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
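The namespace geometry can be read straight out of the fields captured above: nsze is the block count, the low four bits of flbas select the in-use LBA format, and that format's lbads value is the log2 of the block size (for ng1n1/nvme1n1 the in-use format is lbaf7, "ms:64 lbads:12"). A worked example using those values; the variable names are just for illustration:

nsze=0x17a17a
flbas=0x7
lbads=12                            # from lbaf7: "ms:64 lbads:12 rp:0 (in use)"
fmt=$(( flbas & 0xf ))              # -> 7, matching the "(in use)" marker
bytes=$(( nsze * (1 << lbads) ))    # 1548666 blocks * 4096 B = 6343335936 B (~6.3 GB)
echo "namespace size: $bytes bytes (lbaf$fmt)"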
00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:48.182 
23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.182 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:48.183 23:46:20 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:48.183 23:46:20 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:48.183 23:46:20 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:48.183 23:46:20 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.183 23:46:20 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:48.183 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
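Two of the nvme2 fields just captured decode into friendlier numbers, assuming the usual Identify Controller encodings: ver packs the spec version as major<<16 | minor<<8 | tertiary (0x10400 -> 1.4.0), and mdts is a power-of-two multiple of the controller's minimum page size, which is not part of this trace, so 4 KiB is assumed below:

ver=0x10400; mdts=7
printf 'NVMe spec version %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
printf 'max data transfer (4 KiB MPSMIN assumed): %d KiB\n' $(( (1 << mdts) * 4 ))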
00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:48.184 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
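The wctemp/cctemp values captured a few records above are temperature thresholds reported in kelvins (343 K and 373 K here), so they translate to roughly 70 °C warning and 100 °C critical:

wctemp=343; cctemp=373
echo "warning  composite temp threshold: $(( wctemp - 273 )) C"   # ~70 C
echo "critical composite temp threshold: $(( cctemp - 273 )) C"   # ~100 C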
00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.184 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:48.185 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:48.185 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:48.186 
23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.186 
23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.186 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
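At this point the script is done with the controller-level identify data and starts walking the controller's namespaces: the glob over /sys/class/nvme/nvme2 picks up each ng2n* node, and nvme_get repeats the same parse with nvme id-ns into per-namespace arrays (ng2n1, ng2n2, ng2n3). A rough sketch of that enumeration, including how the in-use LBA size can be derived from flbas and the matching lbafN entry (again an illustration under the same assumptions, not the script's own loop):

    #!/usr/bin/env bash
    # Sketch: enumerate ng* namespace nodes of controller nvme2 and report
    # their size and formatted block size from `nvme id-ns` output.
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl"/ng*; do
        [[ -e $ns ]] || continue                # glob did not match anything
        dev=/dev/${ns##*/}
        declare -A id=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}; val=${val# }
            [[ -n $val ]] && id[$reg]=$val
        done < <(nvme id-ns "$dev" 2>/dev/null)
        [[ -n ${id[flbas]:-} ]] || continue     # id-ns failed or device vanished
        fmt=$(( ${id[flbas]} & 0xf ))           # low nibble of flbas selects the LBA format
        lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"${id[lbaf$fmt]}")
        echo "$dev: nsze=${id[nsze]:-?} block_size=$(( 1 << lbads ))"
    done

With the values recorded in this run (flbas=0x4 and lbaf4 reported as "ms:0 lbads:12 rp:0 (in use)"), that works out to 2^12 = 4096 bytes per LBA for these namespaces.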
00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.187 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.188 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:08:48.456 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:08:48.456 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 
23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.457 23:46:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.457 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.458 23:46:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:08:48.458 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:48.459 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.460 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:48.460 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.460 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:48.461 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.461 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:48.462 
23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:48.462 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:48.463 23:46:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.463 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:48.464 23:46:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:48.464 23:46:21 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:48.464 23:46:21 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:48.464 23:46:21 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:48.464 23:46:21 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:48.464 23:46:21 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:48.464 23:46:21 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.464 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:48.465 23:46:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 
23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:48.465 23:46:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.465 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 
23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:48.466 
23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.466 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:48.467 23:46:21 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
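The xtrace above is the scan_nvme_ctrls/nvme_get loop from nvme/functions.sh: for each /sys/class/nvme/nvme* controller it pipes the nvme-cli "id-ctrl" output through a colon-separated read loop and evals every non-empty field into a per-controller bash associative array (nvme0..nvme3), then records the controller, its namespaces and its PCI BDF. A minimal standalone sketch of that same pattern, assuming nvme-cli is installed and a /dev/nvme0 device exists; the names below are illustrative, not the verbatim functions.sh source:

    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        # id-ctrl prints one "name      : value" pair per line; keep non-empty values only.
        [[ -n $val ]] || continue
        reg=${reg//[[:space:]]/}        # strip the padding around the register name
        ctrl_regs[$reg]=${val# }        # drop the leading space after the colon
    done < <(nvme id-ctrl /dev/nvme0)   # this CI run invokes /usr/local/src/nvme-cli/nvme here
    echo "oncs=${ctrl_regs[oncs]:-<missing>}"

The feature-selection trace that follows walks those arrays to pick a controller for the SCC test.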
00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:48.467 23:46:21 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:48.467 23:46:21 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:48.467 23:46:21 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:49.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.612 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.612 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.612 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.612 23:46:22 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:49.612 23:46:22 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:49.612 23:46:22 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.612 23:46:22 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:49.612 ************************************ 00:08:49.612 START TEST nvme_simple_copy 00:08:49.612 ************************************ 00:08:49.612 23:46:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:08:50.187 Initializing NVMe Controllers 00:08:50.187 Attaching to 0000:00:10.0 00:08:50.187 Controller supports SCC. Attached to 0000:00:10.0 00:08:50.187 Namespace ID: 1 size: 6GB 00:08:50.187 Initialization complete. 
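The selection trace above settles on nvme1 (0000:00:10.0) because bit 8 of its ONCS value is set: 0x15d in binary is 1 0101 1101, and bit 8 is how functions.sh decides a controller advertises the NVMe Copy (simple copy) command before the simple_copy test is run. A hedged standalone check in the same spirit, assuming nvme-cli is on PATH and /dev/nvme1 exists; supports_scc is an illustrative helper name and the awk parse assumes the human-readable id-ctrl layout seen in this log:

    supports_scc() {
        # ONCS bit 8 advertises the NVMe Copy (simple copy) command.
        local dev=$1 oncs
        oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        [[ -n $oncs ]] && (( oncs & 1 << 8 ))
    }

    if supports_scc /dev/nvme1; then echo "copy/SCC supported"; else echo "no SCC"; fi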
00:08:50.187 00:08:50.187 Controller QEMU NVMe Ctrl (12340 ) 00:08:50.187 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:08:50.187 Namespace Block Size:4096 00:08:50.187 Writing LBAs 0 to 63 with Random Data 00:08:50.187 Copied LBAs from 0 - 63 to the Destination LBA 256 00:08:50.187 LBAs matching Written Data: 64 00:08:50.187 00:08:50.187 real 0m0.288s 00:08:50.187 user 0m0.114s 00:08:50.187 sys 0m0.072s 00:08:50.187 ************************************ 00:08:50.187 23:46:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.187 23:46:22 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:08:50.187 END TEST nvme_simple_copy 00:08:50.187 ************************************ 00:08:50.187 ************************************ 00:08:50.187 END TEST nvme_scc 00:08:50.187 ************************************ 00:08:50.187 00:08:50.187 real 0m8.103s 00:08:50.187 user 0m1.188s 00:08:50.187 sys 0m1.466s 00:08:50.187 23:46:22 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.187 23:46:22 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:08:50.187 23:46:22 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:08:50.187 23:46:22 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:08:50.187 23:46:22 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:08:50.187 23:46:22 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:08:50.187 23:46:22 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:08:50.187 23:46:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.187 23:46:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.187 23:46:22 -- common/autotest_common.sh@10 -- # set +x 00:08:50.187 ************************************ 00:08:50.187 START TEST nvme_fdp 00:08:50.187 ************************************ 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:08:50.187 * Looking for test storage... 00:08:50.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.187 --rc genhtml_branch_coverage=1 00:08:50.187 --rc genhtml_function_coverage=1 00:08:50.187 --rc genhtml_legend=1 00:08:50.187 --rc geninfo_all_blocks=1 00:08:50.187 --rc geninfo_unexecuted_blocks=1 00:08:50.187 00:08:50.187 ' 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.187 --rc genhtml_branch_coverage=1 00:08:50.187 --rc genhtml_function_coverage=1 00:08:50.187 --rc genhtml_legend=1 00:08:50.187 --rc geninfo_all_blocks=1 00:08:50.187 --rc geninfo_unexecuted_blocks=1 00:08:50.187 00:08:50.187 ' 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.187 --rc genhtml_branch_coverage=1 00:08:50.187 --rc genhtml_function_coverage=1 00:08:50.187 --rc genhtml_legend=1 00:08:50.187 --rc geninfo_all_blocks=1 00:08:50.187 --rc geninfo_unexecuted_blocks=1 00:08:50.187 00:08:50.187 ' 00:08:50.187 23:46:22 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.187 --rc genhtml_branch_coverage=1 00:08:50.187 --rc genhtml_function_coverage=1 00:08:50.187 --rc genhtml_legend=1 00:08:50.187 --rc geninfo_all_blocks=1 00:08:50.187 --rc geninfo_unexecuted_blocks=1 00:08:50.187 00:08:50.187 ' 00:08:50.187 23:46:22 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.187 23:46:22 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.187 23:46:22 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.187 23:46:22 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.187 23:46:22 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.187 23:46:22 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:50.187 23:46:22 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:50.187 23:46:22 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:50.449 23:46:22 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:50.449 23:46:22 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:50.449 23:46:22 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:50.449 23:46:22 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:50.449 23:46:22 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:50.449 23:46:22 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.449 23:46:22 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:50.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.712 Waiting for block devices as requested 00:08:50.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:50.973 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:50.973 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.235 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.544 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:56.544 23:46:28 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:56.544 23:46:28 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:56.544 23:46:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:56.544 23:46:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:56.544 23:46:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.544 23:46:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:56.544 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.544 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.545 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.545 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:56.545 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:56.546 23:46:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 
23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:56.546 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.546 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:08:56.547 23:46:28 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:08:56.547 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.547 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:08:56.548 23:46:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
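The trace above is nvme_get (test/common/nvme/functions.sh) filling the ng0n1 associative array from `/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1` output: each line is split on `:` into a register name and value and eval'd into the array. What follows is a minimal standalone sketch of that same pattern, not the real functions.sh code; the helper name and the simplified whitespace trimming are illustrative assumptions.

#!/usr/bin/env bash
# Sketch of the pattern traced above: parse "key : value" lines emitted by
# nvme-cli identify commands into a bash associative array. sketch_nvme_get
# is a hypothetical simplified helper, not the nvme_get from functions.sh.
declare -A ns_info

sketch_nvme_get() {
    local dev=$1 reg val
    # id-ns prints lines like "nsze  : 0x140000"; read splits at the first
    # ':' so any further colons (e.g. "ms:0 lbads:9 rp:0") stay in the value.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # strip whitespace from the register name
        val=${val# }               # drop one leading space; enough for this sketch
        [[ -n $reg && -n $val ]] || continue
        ns_info[$reg]=$val
    done < <(nvme id-ns "$dev")
}

# Example: query the namespace size and formatted LBA size fields,
# mirroring the ng0n1[nsze]/ng0n1[flbas] assignments seen in the trace.
sketch_nvme_get /dev/ng0n1
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]} nlbaf=${ns_info[nlbaf]}"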
00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:56.548 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:56.549 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.549 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.550 23:46:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:56.550 23:46:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:56.550 23:46:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:56.550 23:46:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.550 23:46:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:56.550 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:56.550 23:46:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
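The trace above is nvme/functions.sh's `nvme_get` walking the `field : value` output of `nvme id-ctrl /dev/nvme1` with `IFS=:` and storing every register in a global associative array (`nvme1[vid]=0x1b36`, `nvme1[mdts]=7`, and so on). A simplified sketch of that pattern, using a plain `declare -A` array and a relaxed IFS instead of the script's `eval`-based indirection:

```bash
#!/usr/bin/env bash
# Simplified sketch of the nvme_get pattern seen in the trace: parse the
# "field : value" lines printed by nvme-cli and keep them in an associative
# array. The device path is the one from the log; error handling is omitted.
declare -A ctrl=()

while IFS=' :' read -r reg val; do
    [[ -n $reg && -n $val ]] || continue    # ignore lines without a value
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme1)

printf 'vid=%s sn=%s mdts=%s oncs=%s\n' \
    "${ctrl[vid]}" "${ctrl[sn]}" "${ctrl[mdts]}" "${ctrl[oncs]}"
```

The real helper routes the assignment through `eval` so the same code can fill `nvme0`, `nvme1`, `ng1n1`, etc. by name, which is why the trace shows `eval 'nvme1[vid]="0x1b36"'` rather than a direct assignment.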
00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.551 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
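Earlier in the trace (functions.sh@60-63, then @47-50 for the next controller) each discovered controller is recorded in a set of bookkeeping arrays: `ctrls`, `nvmes` (the name of its per-controller namespace array), `bdfs` (its PCI address), and `ordered_ctrls`. A hedged sketch of that bookkeeping, with the loop body reduced to a helper; `register_ctrl` is an illustrative name, while the array names and BDFs are the ones from the log:

```bash
#!/usr/bin/env bash
# Sketch of the controller bookkeeping visible at functions.sh@60-63:
# map each controller to itself, to the name of its namespace array, and to
# its PCI BDF; ordered_ctrls indexes controllers by number.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

register_ctrl() {                       # illustrative helper, not in functions.sh
    local ctrl_dev=$1 pci=$2
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns   # e.g. nvme0 -> nvme0_ns
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
}

register_ctrl nvme0 0000:00:11.0        # BDFs as reported in the trace
register_ctrl nvme1 0000:00:10.0
```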
00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:56.552 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:08:56.553 23:46:28 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:08:56.553 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
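At functions.sh@53-57 the script switches to the namespaces of nvme1: a nameref `_ctrl_ns` points at `nvme1_ns`, and an extglob pattern over sysfs picks up both the generic character node (`ng1n1`) and the block node (`nvme1n1`), each of which then gets its own `nvme id-ns` pass (the `ng1n1` fields are being filled in above and below). A standalone sketch of that enumeration, with the `nvme id-ns` call reduced to an echo:

```bash
#!/usr/bin/env bash
# Sketch of the namespace enumeration seen at functions.sh@54: the extglob
# pattern matches both ngXnY and nvmeXnY entries under each controller's
# sysfs directory. The id-ns call is stubbed out with echo.
shopt -s extglob nullglob

for ctrl in /sys/class/nvme/nvme*; do
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns=${ns##*/}
        echo "would run: nvme id-ns /dev/$ns"   # e.g. /dev/ng1n1, /dev/nvme1n1
    done
done
```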
00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:08:56.554 23:46:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
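The per-namespace arrays also capture the LBA format table: `flbas` selects the in-use format and each `lbafN` entry carries the metadata size and `lbads` (log2 of the data block size), which is why the trace marks `lbaf4` "(in use)" for nvme0n1 (`flbas=0x4`) and `lbaf7` for ng1n1 (`flbas=0x7`). A small sketch of decoding those two fields; the function name is illustrative and only the low nibble of FLBAS is considered:

```bash
#!/usr/bin/env bash
# Sketch: derive the data block size from the flbas/lbafN values captured in
# the trace. Only the low 4 bits of FLBAS are used here (the extended-format
# bits are ignored for simplicity).
lba_data_size() {
    local flbas=$1 lbaf_str=$2 lbads=0
    local fmt=$(( flbas & 0xf ))                 # index of the in-use LBA format
    [[ $lbaf_str =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "format $fmt: $(( 1 << lbads ))-byte data blocks"
}

lba_data_size 0x7 'ms:64 lbads:12 rp:0 (in use)'   # ng1n1   -> format 7: 4096-byte data blocks
lba_data_size 0x4 'ms:0 lbads:12 rp:0 (in use)'    # nvme0n1 -> format 4: 4096-byte data blocks
```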
00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.554 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.555 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.555 23:46:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:56.555 23:46:28 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.555 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:56.556 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
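(Aside, not part of the log: the lbafN strings captured around here, e.g. 'ms:0 lbads:9 rp:0' and 'ms:64 lbads:12 rp:0 (in use)', are LBA format descriptors: ms is the per-block metadata size in bytes, lbads is the base-2 log of the data block size, and rp is the relative performance hint. A small hypothetical helper, not present in the repo, that turns one of these strings into a block size:

    # Sketch: extract the data block size from an lbaf descriptor string.
    lbaf_block_size() {
        local desc=$1 lbads
        [[ $desc =~ lbads:([0-9]+) ]] || return 1
        lbads=${BASH_REMATCH[1]}
        echo $((1 << lbads))      # 2^lbads bytes: lbads:9 -> 512, lbads:12 -> 4096
    }
    lbaf_block_size 'ms:64 lbads:12 rp:0 (in use)'   # prints 4096

So the "(in use)" format on this QEMU namespace is a 4096-byte data block with 64 bytes of metadata.)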
00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:56.556 23:46:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:56.556 23:46:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:56.556 23:46:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.556 23:46:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.556 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
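(Aside, not part of the log: just above, the outer loop moved on to the third controller; it walks /sys/class/nvme/nvme*, resolves each controller's PCI address, filters it through pci_can_use, and only then runs nvme_get against the character device. A rough sketch of that enumeration shape, assuming nvme/functions.sh and scripts/common.sh are sourced so pci_can_use and nvme_get exist; how the real script derives the BDF is not shown in the trace, so the sysfs read here is an assumption:

    # Sketch: controller enumeration as suggested by the trace.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(< "$ctrl/address")          # transport address, e.g. 0000:00:12.0 (assumed source)
        pci_can_use "$pci" || continue    # skip devices excluded from this test run
        ctrl_dev=${ctrl##*/}              # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    done

Per the trace, 0000:00:12.0 passes the check, so nvme2 is parsed next.)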
00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:56.557 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
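(Aside, not part of the log: the wctemp=343 and cctemp=373 values parsed just above are the warning and critical composite temperature thresholds, which id-ctrl reports in kelvin; converting them shows this emulated QEMU controller warns at roughly 70 °C and goes critical at roughly 100 °C. A one-line sketch of the conversion:

    # Sketch: kelvin thresholds from id-ctrl, converted to Celsius (ignoring the 0.15 offset).
    kelvin_to_c() { echo $(( $1 - 273 )); }
    kelvin_to_c 343   # -> 70   (warning threshold)
    kelvin_to_c 373   # -> 100  (critical threshold)
)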
00:08:56.557 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:56.558 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:56.558 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
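(Aside, not part of the log: once a controller finishes parsing, as nvme1 did earlier and nvme2 is about to below, functions.sh keeps bookkeeping arrays: ctrls maps the device name to its id-ctrl array, nvmes to the per-namespace array, and bdfs to the PCI address, so later FDP checks can look properties up by controller name. A sketch of that lookup-by-nameref pattern, with illustrative values taken from this trace and names that only stand in for the repo's:

    # Sketch: reading a parsed field back through the bookkeeping arrays via a nameref.
    declare -A bdfs=([nvme2]=0000:00:12.0)
    declare -A nvme2=([subnqn]='nqn.2019-08.org.qemu:12342' [mdts]=7)
    ctrl=nvme2
    declare -n props=$ctrl                # nameref onto the controller's parsed array
    echo "$ctrl @ ${bdfs[$ctrl]} -> ${props[subnqn]}"

This is why the trace below switches to a nameref (_ctrl_ns=nvme2_ns) before iterating the controller's namespaces.)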
00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:08:56.559 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 
23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.560 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:08:56.561 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.561 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 
23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.562 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:08:56.563 
23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
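The functions.sh@53-58 records that bracket these blocks show the per-namespace scan: a nameref to <ctrl>_ns is taken, an extglob pattern matches both the ngXnY character devices and the nvmeXnY block devices under /sys/class/nvme/nvme2, each match is run through nvme_get id-ns, and the resulting array name is filed into the nameref keyed by the namespace index. A rough sketch under those assumptions (the wrapper function name below is made up for illustration; only the loop body mirrors the trace):

    shopt -s extglob nullglob                      # the @( ) pattern below requires extglob

    scan_ctrl_namespaces() {                       # hypothetical name; the real loop lives inside functions.sh
        local ctrl=$1 ns ns_dev
        local -n _ctrl_ns="${ctrl##*/}_ns"         # e.g. nvme2_ns, as at functions.sh@53
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue               # e.g. /sys/class/nvme/nvme2/ng2n1
            ns_dev=${ns##*/}                       # ng2n1, ng2n2, ng2n3, nvme2n1, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev            # keyed by trailing namespace number: 1, 2, 3
        done
    }
    # e.g.: declare -gA nvme2_ns=(); scan_ctrl_namespaces /sys/class/nvme/nvme2

This is why the same id-ns fields repeat for ng2n1, ng2n2, ng2n3 and then again for nvme2n1 further down: every namespace node of the controller gets its own associative array.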
00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:08:56.563 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:08:56.563 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.564 23:46:29 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:56.564 23:46:29 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.564 
23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.564 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:56.565 23:46:29 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.565 
23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.565 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
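The xtrace above is nvme/functions.sh filling the nvme2n1 associative array from nvme id-ns output, one register:value pair per iteration: functions.sh@21 reads the pair, @22 skips empty values, and @23 stores it via eval. A minimal sketch of that loop, reconstructed from the trace alone (the key/value trimming details are assumptions, not visible in the log):

    nvme_get_sketch() {                      # e.g. nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # trace: functions.sh@20, local -gA 'nvme2n1=()'
        while IFS=: read -r reg val; do      # trace: functions.sh@21
            reg=${reg//[[:space:]]/}         # assumption: column padding stripped from the key
            val=${val# }                     # assumption: one leading space dropped from the value
            [[ -n $val ]] || continue        # trace: functions.sh@22 skips empty values
            eval "${ref}[${reg}]=\"${val}\"" # trace: functions.sh@23, e.g. nvme2n1[dpc]="0x1f"
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")   # trace: functions.sh@16
    }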
00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:56.566 23:46:29 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.566 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:56.567 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.567 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:56.568 23:46:29 
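Between namespaces the same parser is reused: functions.sh@54 globs every ng2*/nvme2n* entry under the controller's sysfs node, @55-@57 run the id-ns parse for each device, and @58 files the resulting array name into _ctrl_ns keyed by the namespace index (nvme2n1, nvme2n2, and now nvme2n3 above). The outer loop, sketched under the same assumptions and reusing nvme_get_sketch from the earlier sketch:

    shopt -s extglob                  # assumption: needed for the @() pattern shown at functions.sh@54
    declare -A _ctrl_ns=()            # assumption: reset once per controller
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # trace: functions.sh@54
        [[ -e $ns ]] || continue                                  # trace: functions.sh@55
        ns_dev=${ns##*/}                                          # trace: functions.sh@56, e.g. nvme2n3
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"            # trace: functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev                               # trace: functions.sh@58
    done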
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:56.568 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.568 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:56.569 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:56.569 23:46:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:56.569 23:46:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:56.569 23:46:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:56.569 23:46:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- 
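With all three namespaces parsed, functions.sh@60-@63 commit controller nvme2 to the global maps (ctrls, nvmes, bdfs, ordered_ctrls), and the @47 loop advances to /sys/class/nvme/nvme3, whose PCI address 0000:00:13.0 passes pci_can_use (scripts/common.sh@18-27) before its id-ctrl parse begins. The per-controller bookkeeping, reduced to a sketch (array names and functions.sh line references come from the trace; the BDF lookup and the pci_can_use stub are assumptions):

    declare -A ctrls=() nvmes=() bdfs=(); declare -a ordered_ctrls=()   # assumption: global maps
    pci_can_use() { return 0; }   # stand-in; the real allow/deny filter is scripts/common.sh in the trace
    for ctrl in /sys/class/nvme/nvme*; do                     # trace: functions.sh@47
        [[ -e $ctrl ]] || continue                            # trace: functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")       # assumption: how @49 derives 0000:00:12.0 etc.
        pci_can_use "$pci" || continue                        # trace: functions.sh@50
        ctrl_dev=${ctrl##*/}                                  # trace: functions.sh@51, e.g. nvme3
        nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # trace: functions.sh@52
        # ...per-namespace id-ns loop from the previous sketch runs here (functions.sh@54-@58)...
        ctrls["$ctrl_dev"]=$ctrl_dev                          # trace: functions.sh@60, ctrls[nvme2]=nvme2
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                     # trace: functions.sh@61, nvmes[nvme2]=nvme2_ns
        bdfs["$ctrl_dev"]=$pci                                # trace: functions.sh@62, bdfs[nvme2]=0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # trace: functions.sh@63
    done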
nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.569 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.888 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 
23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.889 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.890 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:56.891 23:46:29 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:56.891 23:46:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:56.892 23:46:29 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:56.892 23:46:29 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:56.892 23:46:29 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:56.892 23:46:29 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:57.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.719 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.719 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.719 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.978 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.978 23:46:30 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:57.978 23:46:30 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:57.978 23:46:30 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.978 23:46:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:57.978 ************************************ 00:08:57.978 START TEST nvme_flexible_data_placement 00:08:57.978 ************************************ 00:08:57.978 23:46:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:58.237 Initializing NVMe Controllers 00:08:58.237 Attaching to 0000:00:13.0 00:08:58.237 Controller supports FDP Attached to 0000:00:13.0 00:08:58.237 Namespace ID: 1 Endurance Group ID: 1 00:08:58.237 Initialization complete. 
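The controller chosen for the FDP run above is the one whose CTRATT identify field has bit 19 (Flexible Data Placement) set: nvme0, nvme1 and nvme2 report ctratt=0x8000 and are skipped, while nvme3 reports 0x88010 and is selected. A condensed, illustrative bash sketch of that gate (the real logic is the ctrl_has_fdp path in nvme/functions.sh traced above; the array below simply restates the values observed in this run):

    # Sketch of the FDP capability check traced above; ctratt values copied from this run.
    declare -A ctratt_by_ctrl=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
    for ctrl in "${!ctratt_by_ctrl[@]}"; do
        ctratt=${ctratt_by_ctrl[$ctrl]}
        if (( ctratt & 1 << 19 )); then   # CTRATT bit 19 advertises FDP support
            echo "$ctrl"                  # only nvme3 (0x88010) passes in this run
        fi
    done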
00:08:58.237 00:08:58.237 ================================== 00:08:58.237 == FDP tests for Namespace: #01 == 00:08:58.237 ================================== 00:08:58.237 00:08:58.237 Get Feature: FDP: 00:08:58.237 ================= 00:08:58.237 Enabled: Yes 00:08:58.237 FDP configuration Index: 0 00:08:58.237 00:08:58.237 FDP configurations log page 00:08:58.237 =========================== 00:08:58.237 Number of FDP configurations: 1 00:08:58.237 Version: 0 00:08:58.238 Size: 112 00:08:58.238 FDP Configuration Descriptor: 0 00:08:58.238 Descriptor Size: 96 00:08:58.238 Reclaim Group Identifier format: 2 00:08:58.238 FDP Volatile Write Cache: Not Present 00:08:58.238 FDP Configuration: Valid 00:08:58.238 Vendor Specific Size: 0 00:08:58.238 Number of Reclaim Groups: 2 00:08:58.238 Number of Reclaim Unit Handles: 8 00:08:58.238 Max Placement Identifiers: 128 00:08:58.238 Number of Namespaces Supported: 256 00:08:58.238 Reclaim unit Nominal Size: 6000000 bytes 00:08:58.238 Estimated Reclaim Unit Time Limit: Not Reported 00:08:58.238 RUH Desc #000: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #001: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #002: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #003: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #004: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #005: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #006: RUH Type: Initially Isolated 00:08:58.238 RUH Desc #007: RUH Type: Initially Isolated 00:08:58.238 00:08:58.238 FDP reclaim unit handle usage log page 00:08:58.238 ====================================== 00:08:58.238 Number of Reclaim Unit Handles: 8 00:08:58.238 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:58.238 RUH Usage Desc #001: RUH Attributes: Unused 00:08:58.238 RUH Usage Desc #002: RUH Attributes: Unused 00:08:58.238 RUH Usage Desc #003: RUH Attributes: Unused 00:08:58.238 RUH Usage Desc #004: RUH Attributes: Unused 00:08:58.238 RUH Usage Desc #005: RUH Attributes: Unused 00:08:58.238 RUH Usage Desc #006: RUH Attributes: Unused 00:08:58.238 RUH Usage Desc #007: RUH Attributes: Unused 00:08:58.238 00:08:58.238 FDP statistics log page 00:08:58.238 ======================= 00:08:58.238 Host bytes with metadata written: 852815872 00:08:58.238 Media bytes with metadata written: 852971520 00:08:58.238 Media bytes erased: 0 00:08:58.238 00:08:58.238 FDP Reclaim unit handle status 00:08:58.238 ============================== 00:08:58.238 Number of RUHS descriptors: 2 00:08:58.238 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000032b1 00:08:58.238 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:08:58.238 00:08:58.238 FDP write on placement id: 0 success 00:08:58.238 00:08:58.238 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:08:58.238 00:08:58.238 IO mgmt send: RUH update for Placement ID: #0 Success 00:08:58.238 00:08:58.238 Get Feature: FDP Events for Placement handle: #0 00:08:58.238 ======================== 00:08:58.238 Number of FDP Events: 6 00:08:58.238 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:08:58.238 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:08:58.238 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:08:58.238 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:08:58.238 FDP Event: #4 Type: Media Reallocated Enabled: No 00:08:58.238 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:08:58.238 00:08:58.238 FDP events log page
00:08:58.238 =================== 00:08:58.238 Number of FDP events: 1 00:08:58.238 FDP Event #0: 00:08:58.238 Event Type: RU Not Written to Capacity 00:08:58.238 Placement Identifier: Valid 00:08:58.238 NSID: Valid 00:08:58.238 Location: Valid 00:08:58.238 Placement Identifier: 0 00:08:58.238 Event Timestamp: 8 00:08:58.238 Namespace Identifier: 1 00:08:58.238 Reclaim Group Identifier: 0 00:08:58.238 Reclaim Unit Handle Identifier: 0 00:08:58.238 00:08:58.238 FDP test passed 00:08:58.238 00:08:58.238 real 0m0.249s 00:08:58.238 user 0m0.067s 00:08:58.238 sys 0m0.078s 00:08:58.238 ************************************ 00:08:58.238 END TEST nvme_flexible_data_placement 00:08:58.238 ************************************ 00:08:58.238 23:46:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.238 23:46:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:08:58.238 ************************************ 00:08:58.238 END TEST nvme_fdp 00:08:58.238 ************************************ 00:08:58.238 00:08:58.238 real 0m8.122s 00:08:58.238 user 0m1.191s 00:08:58.238 sys 0m1.584s 00:08:58.238 23:46:30 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.238 23:46:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:08:58.238 23:46:30 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:08:58.238 23:46:30 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:58.238 23:46:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.238 23:46:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.238 23:46:30 -- common/autotest_common.sh@10 -- # set +x 00:08:58.238 ************************************ 00:08:58.238 START TEST nvme_rpc 00:08:58.238 ************************************ 00:08:58.238 23:46:30 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:08:58.498 * Looking for test storage... 
00:08:58.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:58.498 23:46:30 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.498 23:46:30 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.498 23:46:30 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.498 23:46:31 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.498 23:46:31 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.499 23:46:31 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.499 --rc genhtml_branch_coverage=1 00:08:58.499 --rc genhtml_function_coverage=1 00:08:58.499 --rc genhtml_legend=1 00:08:58.499 --rc geninfo_all_blocks=1 00:08:58.499 --rc geninfo_unexecuted_blocks=1 00:08:58.499 00:08:58.499 ' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.499 --rc genhtml_branch_coverage=1 00:08:58.499 --rc genhtml_function_coverage=1 00:08:58.499 --rc genhtml_legend=1 00:08:58.499 --rc geninfo_all_blocks=1 00:08:58.499 --rc geninfo_unexecuted_blocks=1 00:08:58.499 00:08:58.499 ' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.499 --rc genhtml_branch_coverage=1 00:08:58.499 --rc genhtml_function_coverage=1 00:08:58.499 --rc genhtml_legend=1 00:08:58.499 --rc geninfo_all_blocks=1 00:08:58.499 --rc geninfo_unexecuted_blocks=1 00:08:58.499 00:08:58.499 ' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.499 --rc genhtml_branch_coverage=1 00:08:58.499 --rc genhtml_function_coverage=1 00:08:58.499 --rc genhtml_legend=1 00:08:58.499 --rc geninfo_all_blocks=1 00:08:58.499 --rc geninfo_unexecuted_blocks=1 00:08:58.499 00:08:58.499 ' 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:58.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65772 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65772 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65772 ']' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.499 23:46:31 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.499 23:46:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.760 [2024-12-05 23:46:31.212684] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
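Before the nvme_rpc test attaches a controller, get_first_nvme_bdf (traced above) derives the target PCI address by asking gen_nvme.sh for its JSON config, pulling every traddr with jq, and taking the first entry. A minimal stand-alone sketch of that step, reusing the path and jq filter shown in the trace; the error guard is simplified for illustration:

    # Sketch of get_first_nvme_bdf as traced above; gen_nvme.sh path and jq filter
    # are taken from this run, the guard is illustrative.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"   # 0000:00:10.0 in this run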
00:08:58.760 [2024-12-05 23:46:31.212834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65772 ] 00:08:58.760 [2024-12-05 23:46:31.375328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.021 [2024-12-05 23:46:31.513816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.021 [2024-12-05 23:46:31.513877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.597 23:46:32 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.597 23:46:32 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:59.597 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:59.858 Nvme0n1 00:08:59.858 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:59.858 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:00.119 request: 00:09:00.119 { 00:09:00.119 "bdev_name": "Nvme0n1", 00:09:00.119 "filename": "non_existing_file", 00:09:00.119 "method": "bdev_nvme_apply_firmware", 00:09:00.119 "req_id": 1 00:09:00.119 } 00:09:00.119 Got JSON-RPC error response 00:09:00.119 response: 00:09:00.119 { 00:09:00.119 "code": -32603, 00:09:00.119 "message": "open file failed." 00:09:00.119 } 00:09:00.119 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:00.119 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:00.119 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:00.380 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:00.380 23:46:32 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65772 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65772 ']' 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65772 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65772 00:09:00.380 killing process with pid 65772 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65772' 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65772 00:09:00.380 23:46:32 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65772 00:09:02.295 ************************************ 00:09:02.295 END TEST nvme_rpc 00:09:02.295 ************************************ 00:09:02.295 00:09:02.295 real 0m3.700s 00:09:02.295 user 0m6.875s 00:09:02.295 sys 0m0.648s 00:09:02.295 23:46:34 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.295 23:46:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.295 23:46:34 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:02.295 23:46:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:02.295 23:46:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.295 23:46:34 -- common/autotest_common.sh@10 -- # set +x 00:09:02.295 ************************************ 00:09:02.295 START TEST nvme_rpc_timeouts 00:09:02.295 ************************************ 00:09:02.295 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:02.295 * Looking for test storage... 00:09:02.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.295 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.295 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.295 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.295 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:02.295 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:02.296 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.296 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:02.296 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.296 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.296 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.296 23:46:34 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.296 --rc genhtml_branch_coverage=1 00:09:02.296 --rc genhtml_function_coverage=1 00:09:02.296 --rc genhtml_legend=1 00:09:02.296 --rc geninfo_all_blocks=1 00:09:02.296 --rc geninfo_unexecuted_blocks=1 00:09:02.296 00:09:02.296 ' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.296 --rc genhtml_branch_coverage=1 00:09:02.296 --rc genhtml_function_coverage=1 00:09:02.296 --rc genhtml_legend=1 00:09:02.296 --rc geninfo_all_blocks=1 00:09:02.296 --rc geninfo_unexecuted_blocks=1 00:09:02.296 00:09:02.296 ' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.296 --rc genhtml_branch_coverage=1 00:09:02.296 --rc genhtml_function_coverage=1 00:09:02.296 --rc genhtml_legend=1 00:09:02.296 --rc geninfo_all_blocks=1 00:09:02.296 --rc geninfo_unexecuted_blocks=1 00:09:02.296 00:09:02.296 ' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.296 --rc genhtml_branch_coverage=1 00:09:02.296 --rc genhtml_function_coverage=1 00:09:02.296 --rc genhtml_legend=1 00:09:02.296 --rc geninfo_all_blocks=1 00:09:02.296 --rc geninfo_unexecuted_blocks=1 00:09:02.296 00:09:02.296 ' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65844 00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65844 00:09:02.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
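The "Looking for test storage" preamble above gates the lcov coverage flags on the installed lcov version via scripts/common.sh (lt 1.15 2, which expands to cmp_versions comparing the dotted fields one by one). A stand-alone sketch of that comparison under the same inputs; the helper name and structure are illustrative, not the upstream implementation:

    # Illustrative dotted-version compare equivalent to the lt 1.15 2 trace above.
    version_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly newer
        done
        return 1                                        # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2: enable the branch-coverage options"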
00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65876 00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65876 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65876 ']' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.296 23:46:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:02.296 23:46:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:02.296 [2024-12-05 23:46:34.911710] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:09:02.296 [2024-12-05 23:46:34.912499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65876 ] 00:09:02.556 [2024-12-05 23:46:35.076363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.556 [2024-12-05 23:46:35.214031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.556 [2024-12-05 23:46:35.214050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.498 23:46:35 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.498 23:46:35 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:03.498 23:46:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:03.498 Checking default timeout settings: 00:09:03.498 23:46:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:03.761 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:03.761 Making settings changes with rpc: 00:09:03.761 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:04.024 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:04.024 Check default vs. 
modified settings: 00:09:04.024 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:04.286 Setting action_on_timeout is changed as expected. 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:04.286 Setting timeout_us is changed as expected. 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
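Each setting is verified the same way in the loop above: grep the key out of the saved default and modified config dumps, keep the second field with awk, strip punctuation with sed, and require that the two values differ. A condensed sketch of that loop, reusing this run's tmpfiles and the values applied earlier with bdev_nvme_set_options (--timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort); simplified for illustration:

    # Condensed default-vs-modified comparison, mirroring the trace above.
    default=/tmp/settings_default_65844
    modified=/tmp/settings_modified_65844
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" == "$after" ]] && { echo "Setting $setting was not changed" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done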
00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:04.286 Setting timeout_admin_us is changed as expected. 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65844 /tmp/settings_modified_65844 00:09:04.286 23:46:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65876 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65876 ']' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65876 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65876 00:09:04.286 killing process with pid 65876 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65876' 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65876 00:09:04.286 23:46:36 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65876 00:09:06.212 RPC TIMEOUT SETTING TEST PASSED. 00:09:06.212 23:46:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
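The nvme_rpc_timeouts trace above boils down to: save the bdev_nvme options with their defaults, change them over RPC, save them again, and verify that every touched field actually changed. A minimal sketch of that flow, using the rpc.py path from this run; the temp-file names are hypothetical, patterned on the /tmp/settings_default_* files in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    defaults=/tmp/settings_default_$$      # hypothetical names for illustration
    modified=/tmp/settings_modified_$$
    "$rpc" save_config > "$defaults"
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > "$modified"
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # same grep | awk | sed pipeline as the trace: take the value, strip punctuation
        before=$(grep "$setting" "$defaults" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed"
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done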
00:09:06.212 ************************************ 00:09:06.212 END TEST nvme_rpc_timeouts 00:09:06.212 ************************************ 00:09:06.212 00:09:06.212 real 0m3.951s 00:09:06.212 user 0m7.548s 00:09:06.212 sys 0m0.666s 00:09:06.212 23:46:38 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.212 23:46:38 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:06.212 23:46:38 -- spdk/autotest.sh@239 -- # uname -s 00:09:06.212 23:46:38 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:06.212 23:46:38 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:06.212 23:46:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.212 23:46:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.212 23:46:38 -- common/autotest_common.sh@10 -- # set +x 00:09:06.212 ************************************ 00:09:06.212 START TEST sw_hotplug 00:09:06.212 ************************************ 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:06.212 * Looking for test storage... 00:09:06.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.212 23:46:38 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.212 --rc genhtml_branch_coverage=1 00:09:06.212 --rc genhtml_function_coverage=1 00:09:06.212 --rc genhtml_legend=1 00:09:06.212 --rc geninfo_all_blocks=1 00:09:06.212 --rc geninfo_unexecuted_blocks=1 00:09:06.212 00:09:06.212 ' 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.212 --rc genhtml_branch_coverage=1 00:09:06.212 --rc genhtml_function_coverage=1 00:09:06.212 --rc genhtml_legend=1 00:09:06.212 --rc geninfo_all_blocks=1 00:09:06.212 --rc geninfo_unexecuted_blocks=1 00:09:06.212 00:09:06.212 ' 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.212 --rc genhtml_branch_coverage=1 00:09:06.212 --rc genhtml_function_coverage=1 00:09:06.212 --rc genhtml_legend=1 00:09:06.212 --rc geninfo_all_blocks=1 00:09:06.212 --rc geninfo_unexecuted_blocks=1 00:09:06.212 00:09:06.212 ' 00:09:06.212 23:46:38 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.212 --rc genhtml_branch_coverage=1 00:09:06.212 --rc genhtml_function_coverage=1 00:09:06.212 --rc genhtml_legend=1 00:09:06.212 --rc geninfo_all_blocks=1 00:09:06.212 --rc geninfo_unexecuted_blocks=1 00:09:06.212 00:09:06.212 ' 00:09:06.212 23:46:38 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:06.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.737 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:06.737 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:06.737 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:06.737 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:06.737 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:06.737 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:06.737 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
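The lcov probe traced above relies on scripts/common.sh's version comparison: split both version strings on '.', '-' and ':' and compare them field by field, padding the shorter one with zeros. A self-contained sketch of that idea follows; the function name and usage are illustrative, not the library's own helpers:

    version_lt() {  # returns 0 if $1 < $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    # mirrors the 'lt 1.15 2' check above: pick lcov options based on the installed version
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov 1.x detected"
    fi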
00:09:06.737 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:06.737 23:46:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:06.738 23:46:39 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:06.738 23:46:39 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:06.738 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:06.738 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:06.738 23:46:39 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:07.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.311 Waiting for block devices as requested 00:09:07.311 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.311 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.572 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.572 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.862 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:12.862 23:46:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:12.862 23:46:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:13.122 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:13.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.122 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:13.380 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:13.640 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.640 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:13.901 23:46:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66746 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:13.901 23:46:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:13.901 23:46:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:13.901 23:46:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:13.901 23:46:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:13.901 23:46:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:13.901 23:46:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:14.162 Initializing NVMe Controllers 00:09:14.162 Attaching to 0000:00:10.0 00:09:14.162 Attaching to 0000:00:11.0 00:09:14.162 Attached to 0000:00:11.0 00:09:14.162 Attached to 0000:00:10.0 00:09:14.162 Initialization complete. Starting I/O... 
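A few lines up, nvme_in_userspace walks every PCI function whose class/subclass is 01/08 with prog-if 02 (NVMe); this run finds four QEMU controllers and then keeps only the first nvme_count=2 of them, and PCI_ALLOWED is exported afterwards so setup.sh rebinds just those two. A condensed sketch of that discovery, written against plain lspci output rather than the scripts/common.sh helpers:

    nvme_bdfs=()
    while read -r bdf; do
        nvme_bdfs+=("$bdf")
    done < <(lspci -mm -n -D | grep -i -- "-p02" | tr -d '"' | awk '$2 == "0108" {print $1}')
    # keep the first two, the way nvme_count=2 truncates the array in the trace above
    nvme_bdfs=("${nvme_bdfs[@]:0:2}")
    printf '%s\n' "${nvme_bdfs[@]}"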
00:09:14.162 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:14.162 QEMU NVMe Ctrl (12340 ): 2 I/Os completed (+2) 00:09:14.162 00:09:15.103 QEMU NVMe Ctrl (12341 ): 2284 I/Os completed (+2284) 00:09:15.103 QEMU NVMe Ctrl (12340 ): 2286 I/Os completed (+2284) 00:09:15.103 00:09:16.046 QEMU NVMe Ctrl (12341 ): 5232 I/Os completed (+2948) 00:09:16.046 QEMU NVMe Ctrl (12340 ): 5250 I/Os completed (+2964) 00:09:16.046 00:09:16.989 QEMU NVMe Ctrl (12341 ): 8080 I/Os completed (+2848) 00:09:16.989 QEMU NVMe Ctrl (12340 ): 8133 I/Os completed (+2883) 00:09:16.989 00:09:18.385 QEMU NVMe Ctrl (12341 ): 10812 I/Os completed (+2732) 00:09:18.385 QEMU NVMe Ctrl (12340 ): 10889 I/Os completed (+2756) 00:09:18.385 00:09:19.358 QEMU NVMe Ctrl (12341 ): 13617 I/Os completed (+2805) 00:09:19.358 QEMU NVMe Ctrl (12340 ): 13691 I/Os completed (+2802) 00:09:19.358 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:19.927 [2024-12-05 23:46:52.478573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:19.927 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:19.927 [2024-12-05 23:46:52.479852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.479901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.479918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.479935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:19.927 [2024-12-05 23:46:52.481755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.481833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.481864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.481962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:19.927 [2024-12-05 23:46:52.497776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:19.927 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:19.927 [2024-12-05 23:46:52.498896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.499556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.499935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.500180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:19.927 [2024-12-05 23:46:52.505243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.505495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.505671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 [2024-12-05 23:46:52.505766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:19.927 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:19.927 EAL: Scan for (pci) bus failed. 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:19.927 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:20.187 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:20.187 Attaching to 0000:00:10.0 00:09:20.187 Attached to 0000:00:10.0 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:20.187 23:46:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:20.187 Attaching to 0000:00:11.0 00:09:20.187 Attached to 0000:00:11.0 00:09:21.125 QEMU NVMe Ctrl (12340 ): 3111 I/Os completed (+3111) 00:09:21.125 QEMU NVMe Ctrl (12341 ): 2808 I/Os completed (+2808) 00:09:21.125 00:09:22.054 QEMU NVMe Ctrl (12340 ): 6142 I/Os completed (+3031) 00:09:22.054 QEMU NVMe Ctrl (12341 ): 5912 I/Os completed (+3104) 00:09:22.054 00:09:22.983 QEMU NVMe Ctrl (12340 ): 9237 I/Os completed (+3095) 00:09:22.983 QEMU NVMe Ctrl (12341 ): 9143 I/Os completed (+3231) 00:09:22.983 00:09:24.365 QEMU NVMe Ctrl (12340 ): 12267 I/Os completed (+3030) 00:09:24.365 QEMU NVMe Ctrl (12341 ): 12323 I/Os completed (+3180) 00:09:24.365 00:09:25.299 QEMU NVMe Ctrl (12340 ): 15372 I/Os completed (+3105) 00:09:25.299 QEMU NVMe Ctrl (12341 ): 15541 I/Os completed (+3218) 00:09:25.299 00:09:26.231 QEMU NVMe Ctrl (12340 ): 18484 I/Os completed (+3112) 00:09:26.231 QEMU NVMe Ctrl (12341 ): 18662 I/Os completed (+3121) 00:09:26.231 00:09:27.216 QEMU NVMe Ctrl (12340 ): 21676 I/Os completed (+3192) 00:09:27.216 QEMU NVMe Ctrl (12341 ): 21855 I/Os completed (+3193) 
00:09:27.216 00:09:28.152 QEMU NVMe Ctrl (12340 ): 24804 I/Os completed (+3128) 00:09:28.152 QEMU NVMe Ctrl (12341 ): 25027 I/Os completed (+3172) 00:09:28.152 00:09:29.085 QEMU NVMe Ctrl (12340 ): 27872 I/Os completed (+3068) 00:09:29.085 QEMU NVMe Ctrl (12341 ): 28135 I/Os completed (+3108) 00:09:29.085 00:09:30.024 QEMU NVMe Ctrl (12340 ): 30959 I/Os completed (+3087) 00:09:30.024 QEMU NVMe Ctrl (12341 ): 31230 I/Os completed (+3095) 00:09:30.024 00:09:31.412 QEMU NVMe Ctrl (12340 ): 33878 I/Os completed (+2919) 00:09:31.412 QEMU NVMe Ctrl (12341 ): 34339 I/Os completed (+3109) 00:09:31.412 00:09:31.986 QEMU NVMe Ctrl (12340 ): 36986 I/Os completed (+3108) 00:09:31.986 QEMU NVMe Ctrl (12341 ): 37525 I/Os completed (+3186) 00:09:31.986 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:32.248 [2024-12-05 23:47:04.821867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:32.248 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:32.248 [2024-12-05 23:47:04.824143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.824193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.824213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.824231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:32.248 [2024-12-05 23:47:04.826139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.826203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.826233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.826260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:32.248 [2024-12-05 23:47:04.845126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
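Each hotplug event above is driven by bare echo commands whose redirections xtrace does not show (sw_hotplug.sh lines 40 and 56-62). They are consistent with the usual sysfs surprise-remove / rescan / driver-override sequence; the sketch below spells that sequence out, with the explicit caveat that the sysfs targets are inferred, not read from this log:

    bdf=0000:00:10.0
    # surprise-remove the function; the attached driver sees the controller fail,
    # as in the nvme_ctrlr_fail messages above
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    sleep 6                                    # hotplug_wait in this run
    # bring the slot back and steer the function to a userspace-capable driver
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override afterwards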
00:09:32.248 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:32.248 [2024-12-05 23:47:04.846221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.846333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.846414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.846446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:32.248 [2024-12-05 23:47:04.848297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.848399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.848470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 [2024-12-05 23:47:04.848501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:32.248 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:32.512 23:47:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:32.512 Attaching to 0000:00:10.0 00:09:32.512 Attached to 0000:00:10.0 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:32.512 23:47:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:32.512 Attaching to 0000:00:11.0 00:09:32.512 Attached to 0000:00:11.0 00:09:33.079 QEMU NVMe Ctrl (12340 ): 2176 I/Os completed (+2176) 00:09:33.079 QEMU NVMe Ctrl (12341 ): 1922 I/Os completed (+1922) 00:09:33.079 00:09:34.016 QEMU NVMe Ctrl (12340 ): 5306 I/Os completed (+3130) 00:09:34.016 QEMU NVMe Ctrl (12341 ): 5004 I/Os completed (+3082) 00:09:34.016 00:09:35.012 QEMU NVMe Ctrl (12340 ): 8189 I/Os completed (+2883) 00:09:35.012 QEMU NVMe Ctrl (12341 ): 7895 I/Os completed (+2891) 00:09:35.012 00:09:36.408 QEMU NVMe Ctrl (12340 ): 11424 I/Os completed (+3235) 00:09:36.408 QEMU NVMe Ctrl (12341 ): 11122 I/Os completed (+3227) 00:09:36.408 00:09:37.340 QEMU NVMe Ctrl (12340 ): 14624 I/Os completed (+3200) 00:09:37.340 QEMU NVMe Ctrl (12341 ): 14322 I/Os completed (+3200) 00:09:37.340 00:09:38.272 QEMU NVMe Ctrl (12340 ): 17728 I/Os completed (+3104) 00:09:38.272 QEMU NVMe Ctrl (12341 ): 17439 I/Os completed (+3117) 00:09:38.272 00:09:39.209 QEMU NVMe Ctrl (12340 ): 20870 I/Os completed (+3142) 00:09:39.209 QEMU NVMe Ctrl (12341 ): 20590 I/Os completed (+3151) 00:09:39.209 00:09:40.154 QEMU NVMe Ctrl (12340 ): 23964 I/Os completed (+3094) 00:09:40.154 QEMU NVMe Ctrl (12341 ): 23666 I/Os completed (+3076) 00:09:40.154 
00:09:41.097 QEMU NVMe Ctrl (12340 ): 27430 I/Os completed (+3466) 00:09:41.097 QEMU NVMe Ctrl (12341 ): 27275 I/Os completed (+3609) 00:09:41.097 00:09:42.037 QEMU NVMe Ctrl (12340 ): 30541 I/Os completed (+3111) 00:09:42.037 QEMU NVMe Ctrl (12341 ): 30411 I/Os completed (+3136) 00:09:42.037 00:09:43.489 QEMU NVMe Ctrl (12340 ): 33658 I/Os completed (+3117) 00:09:43.489 QEMU NVMe Ctrl (12341 ): 33572 I/Os completed (+3161) 00:09:43.489 00:09:44.054 QEMU NVMe Ctrl (12340 ): 36787 I/Os completed (+3129) 00:09:44.054 QEMU NVMe Ctrl (12341 ): 36662 I/Os completed (+3090) 00:09:44.054 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:44.620 [2024-12-05 23:47:17.085439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:44.620 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:44.620 [2024-12-05 23:47:17.086484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.086524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.086539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.086553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:44.620 [2024-12-05 23:47:17.090055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.090097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.090111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.090125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:44.620 [2024-12-05 23:47:17.108377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:44.620 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:44.620 [2024-12-05 23:47:17.109358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.109394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.109409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.109421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:44.620 [2024-12-05 23:47:17.110769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.110803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.110817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 [2024-12-05 23:47:17.110827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:44.620 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:44.620 Attaching to 0000:00:10.0 00:09:44.620 Attached to 0000:00:10.0 00:09:44.878 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:44.878 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:44.878 23:47:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:44.878 Attaching to 0000:00:11.0 00:09:44.878 Attached to 0000:00:11.0 00:09:44.878 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:44.878 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:44.878 [2024-12-05 23:47:17.367171] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:57.068 23:47:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:57.068 23:47:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.068 23:47:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.89 00:09:57.068 23:47:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.89 00:09:57.068 23:47:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:09:57.068 23:47:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.89 00:09:57.068 23:47:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.89 2 00:09:57.068 remove_attach_helper took 42.89s to complete (handling 2 nvme drive(s)) 23:47:29 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66746 00:10:03.751 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66746) - No such process 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66746 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67296 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67296 00:10:03.751 23:47:35 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67296 ']' 00:10:03.751 23:47:35 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.751 23:47:35 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.751 23:47:35 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.751 23:47:35 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.751 23:47:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:03.751 23:47:35 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:03.751 [2024-12-05 23:47:35.444326] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:10:03.751 [2024-12-05 23:47:35.444445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67296 ] 00:10:03.751 [2024-12-05 23:47:35.605275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.751 [2024-12-05 23:47:35.700321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:03.751 23:47:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:03.751 23:47:36 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:03.751 23:47:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:10.356 23:47:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.356 23:47:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:10.356 23:47:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:10.356 [2024-12-05 23:47:42.381752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:10.356 [2024-12-05 23:47:42.383124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.383161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.383174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.383202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.383210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.383219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.383226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.383234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.383241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.383252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.383259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.383267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.781745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:10.356 [2024-12-05 23:47:42.783039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.783072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.783084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.783101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.783110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.783117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.783125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.783131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.783139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 [2024-12-05 23:47:42.783146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:10.356 [2024-12-05 23:47:42.783154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:10.356 [2024-12-05 23:47:42.783160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:10.356 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:10.357 23:47:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.357 23:47:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:10.357 23:47:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:10.357 23:47:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:10.357 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:10.357 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
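In this target-driven half of the test, "which controllers are still present" is answered by the running spdk_tgt rather than by lspci: bdev_bdfs pipes bdev_get_bdevs through the jq filter shown above and de-duplicates the PCI addresses. A standalone version of that query, reusing the rpc.py path from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdfs=($("$rpc" bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u))
    if (( ${#bdfs[@]} > 0 )); then
        printf 'NVMe bdev still backed by %s\n' "${bdfs[@]}"
    fi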
00:10:10.357 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:10.357 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:10.357 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:10.615 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:10.615 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:10.615 23:47:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.812 23:47:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.812 23:47:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.812 23:47:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.812 23:47:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.812 23:47:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.812 23:47:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:22.812 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:22.812 [2024-12-05 23:47:55.281951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:22.812 [2024-12-05 23:47:55.283257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.812 [2024-12-05 23:47:55.283357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.812 [2024-12-05 23:47:55.283420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.812 [2024-12-05 23:47:55.283474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.812 [2024-12-05 23:47:55.283493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.812 [2024-12-05 23:47:55.283544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.812 [2024-12-05 23:47:55.283595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.812 [2024-12-05 23:47:55.283613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.812 [2024-12-05 23:47:55.283635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.812 [2024-12-05 23:47:55.283700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.812 [2024-12-05 23:47:55.283718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.812 [2024-12-05 23:47:55.283743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:23.150 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:23.150 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:23.150 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:23.150 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:23.150 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:23.150 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:23.150 23:47:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.150 23:47:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:23.151 23:47:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.151 [2024-12-05 23:47:55.781948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
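Between removal and the "(( ... > 0 ))" checks above sits a simple poll: re-run the same query every half second until the removed controllers' addresses stop showing up. A sketch of that loop, reusing $rpc from the previous snippet; the timeout handling is an assumption, the log only shows the successful path:

    deadline=$(( SECONDS + 6 ))               # hotplug_wait in this run
    while :; do
        bdfs=($("$rpc" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u))
        (( ${#bdfs[@]} == 0 )) && break
        if (( SECONDS >= deadline )); then
            echo "Timed out waiting for ${bdfs[*]} to be gone"
            exit 1
        fi
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done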
00:10:23.151 [2024-12-05 23:47:55.783238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.151 [2024-12-05 23:47:55.783339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:23.151 [2024-12-05 23:47:55.783408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:23.151 [2024-12-05 23:47:55.783466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.151 [2024-12-05 23:47:55.783486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:23.151 [2024-12-05 23:47:55.783539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:23.151 [2024-12-05 23:47:55.783614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.151 [2024-12-05 23:47:55.783631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:23.151 [2024-12-05 23:47:55.783655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:23.151 [2024-12-05 23:47:55.783734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.151 [2024-12-05 23:47:55.783772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:23.151 [2024-12-05 23:47:55.783795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:23.151 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:23.151 23:47:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:23.729 23:47:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.729 23:47:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:23.729 23:47:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.729 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.991 23:47:56 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.991 23:47:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.204 23:48:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.204 23:48:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.204 23:48:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.204 23:48:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.204 23:48:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.204 23:48:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:36.204 23:48:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:36.204 [2024-12-05 23:48:08.682174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:36.204 [2024-12-05 23:48:08.683581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.204 [2024-12-05 23:48:08.683616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.204 [2024-12-05 23:48:08.683626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.204 [2024-12-05 23:48:08.683643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.204 [2024-12-05 23:48:08.683651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.204 [2024-12-05 23:48:08.683661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.204 [2024-12-05 23:48:08.683668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.204 [2024-12-05 23:48:08.683676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.204 [2024-12-05 23:48:08.683682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.204 [2024-12-05 23:48:08.683690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.204 [2024-12-05 23:48:08.683697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.204 [2024-12-05 23:48:08.683705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.462 [2024-12-05 23:48:09.082163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:36.463 [2024-12-05 23:48:09.083307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.463 [2024-12-05 23:48:09.083337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.463 [2024-12-05 23:48:09.083348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.463 [2024-12-05 23:48:09.083363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.463 [2024-12-05 23:48:09.083371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.463 [2024-12-05 23:48:09.083378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.463 [2024-12-05 23:48:09.083387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.463 [2024-12-05 23:48:09.083393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.463 [2024-12-05 23:48:09.083402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.463 [2024-12-05 23:48:09.083409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.463 [2024-12-05 23:48:09.083416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.463 [2024-12-05 23:48:09.083423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.463 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:36.463 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:36.463 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:36.463 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.463 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.721 23:48:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.721 23:48:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.721 23:48:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.721 23:48:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:10:48.935 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:48.935 23:48:21 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:48.935 23:48:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:48.935 23:48:21 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.501 23:48:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.501 23:48:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.501 23:48:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:55.501 23:48:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:55.501 [2024-12-05 23:48:27.588068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:55.501 [2024-12-05 23:48:27.589296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.589394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.589451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.589514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.589533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.589593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.589620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.589662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.589687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.589733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.589751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.589803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.988066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:55.501 [2024-12-05 23:48:27.990359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.990470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.990538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.990600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.990623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.990671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.990698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.990739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.990766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 [2024-12-05 23:48:27.990813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.501 [2024-12-05 23:48:27.990834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.501 [2024-12-05 23:48:27.990856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.501 23:48:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.501 23:48:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.501 23:48:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.501 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.759 23:48:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.956 23:48:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.956 23:48:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 23:48:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.956 23:48:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.956 23:48:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.956 23:48:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:07.956 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.956 [2024-12-05 23:48:40.488276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:07.956 [2024-12-05 23:48:40.489540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.956 [2024-12-05 23:48:40.489641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.956 [2024-12-05 23:48:40.489698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.956 [2024-12-05 23:48:40.489734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.956 [2024-12-05 23:48:40.489776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.956 [2024-12-05 23:48:40.489803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.956 [2024-12-05 23:48:40.489870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.956 [2024-12-05 23:48:40.489890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.956 [2024-12-05 23:48:40.489913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.956 [2024-12-05 23:48:40.489940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.956 [2024-12-05 23:48:40.489957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.956 [2024-12-05 23:48:40.490041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.215 [2024-12-05 23:48:40.888279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:08.215 [2024-12-05 23:48:40.889572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.215 [2024-12-05 23:48:40.889676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.215 [2024-12-05 23:48:40.889737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.215 [2024-12-05 23:48:40.889797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.215 [2024-12-05 23:48:40.889819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.215 [2024-12-05 23:48:40.889843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.215 [2024-12-05 23:48:40.889868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.215 [2024-12-05 23:48:40.889917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.215 [2024-12-05 23:48:40.889946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.215 [2024-12-05 23:48:40.889978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.215 [2024-12-05 23:48:40.890027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.215 [2024-12-05 23:48:40.890054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.474 23:48:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.474 23:48:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.474 23:48:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:08.474 23:48:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:08.474 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:08.732 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:08.732 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.732 23:48:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.973 23:48:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.973 23:48:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.973 23:48:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.973 [2024-12-05 23:48:53.288506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:20.973 [2024-12-05 23:48:53.289909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.973 [2024-12-05 23:48:53.290037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.973 [2024-12-05 23:48:53.290102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.973 [2024-12-05 23:48:53.290164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.973 [2024-12-05 23:48:53.290182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.973 [2024-12-05 23:48:53.290209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.973 [2024-12-05 23:48:53.290234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.973 [2024-12-05 23:48:53.290253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.973 [2024-12-05 23:48:53.290276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.973 [2024-12-05 23:48:53.290440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.973 [2024-12-05 23:48:53.290448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.973 [2024-12-05 23:48:53.290456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.973 23:48:53 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.973 23:48:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.973 23:48:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.973 23:48:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:20.973 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:21.232 [2024-12-05 23:48:53.788508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:21.232 [2024-12-05 23:48:53.789775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.232 [2024-12-05 23:48:53.790086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.232 [2024-12-05 23:48:53.790105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.232 [2024-12-05 23:48:53.790122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.232 [2024-12-05 23:48:53.790131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.232 [2024-12-05 23:48:53.790139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.232 [2024-12-05 23:48:53.790147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.232 [2024-12-05 23:48:53.790155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.232 [2024-12-05 23:48:53.790163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.232 [2024-12-05 23:48:53.790170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.232 [2024-12-05 23:48:53.790180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.232 [2024-12-05 23:48:53.790187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:21.232 23:48:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.232 23:48:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.232 23:48:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:21.232 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:21.489 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.489 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.489 23:48:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.489 23:48:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.66 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.66 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:11:33.693 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:33.693 23:49:06 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67296 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67296 ']' 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67296 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67296 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.693 23:49:06 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67296' 00:11:33.693 killing process with pid 67296 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67296 00:11:33.693 23:49:06 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67296 00:11:35.069 23:49:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:35.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.331 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:35.331 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:35.590 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:35.590 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:35.590 00:11:35.590 real 2m29.481s 00:11:35.590 user 1m50.853s 00:11:35.590 sys 0m17.121s 00:11:35.590 23:49:08 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.590 ************************************ 00:11:35.590 23:49:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 END TEST sw_hotplug 00:11:35.590 ************************************ 00:11:35.590 23:49:08 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:35.590 23:49:08 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:35.590 23:49:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.590 23:49:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.590 23:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:35.590 ************************************ 00:11:35.590 START TEST nvme_xnvme 00:11:35.590 ************************************ 00:11:35.590 23:49:08 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:35.852 * Looking for test storage... 
00:11:35.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.852 23:49:08 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.852 --rc genhtml_branch_coverage=1 00:11:35.852 --rc genhtml_function_coverage=1 00:11:35.852 --rc genhtml_legend=1 00:11:35.852 --rc geninfo_all_blocks=1 00:11:35.852 --rc geninfo_unexecuted_blocks=1 00:11:35.852 00:11:35.852 ' 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.852 --rc genhtml_branch_coverage=1 00:11:35.852 --rc genhtml_function_coverage=1 00:11:35.852 --rc genhtml_legend=1 00:11:35.852 --rc geninfo_all_blocks=1 00:11:35.852 --rc geninfo_unexecuted_blocks=1 00:11:35.852 00:11:35.852 ' 00:11:35.852 23:49:08 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.852 --rc genhtml_branch_coverage=1 00:11:35.852 --rc genhtml_function_coverage=1 00:11:35.852 --rc genhtml_legend=1 00:11:35.852 --rc geninfo_all_blocks=1 00:11:35.852 --rc geninfo_unexecuted_blocks=1 00:11:35.852 00:11:35.852 ' 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.852 --rc genhtml_branch_coverage=1 00:11:35.852 --rc genhtml_function_coverage=1 00:11:35.852 --rc genhtml_legend=1 00:11:35.852 --rc geninfo_all_blocks=1 00:11:35.852 --rc geninfo_unexecuted_blocks=1 00:11:35.852 00:11:35.852 ' 00:11:35.852 23:49:08 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:35.852 23:49:08 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:35.852 23:49:08 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:35.852 23:49:08 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:35.852 23:49:08 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:35.852 23:49:08 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:35.853 23:49:08 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:35.853 23:49:08 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:35.853 23:49:08 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:35.853 #define SPDK_CONFIG_H 00:11:35.853 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:35.853 #define SPDK_CONFIG_APPS 1 00:11:35.853 #define SPDK_CONFIG_ARCH native 00:11:35.853 #define SPDK_CONFIG_ASAN 1 00:11:35.853 #undef SPDK_CONFIG_AVAHI 00:11:35.853 #undef SPDK_CONFIG_CET 00:11:35.853 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:35.853 #define SPDK_CONFIG_COVERAGE 1 00:11:35.853 #define SPDK_CONFIG_CROSS_PREFIX 00:11:35.853 #undef SPDK_CONFIG_CRYPTO 00:11:35.853 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:35.853 #undef SPDK_CONFIG_CUSTOMOCF 00:11:35.853 #undef SPDK_CONFIG_DAOS 00:11:35.853 #define SPDK_CONFIG_DAOS_DIR 00:11:35.853 #define SPDK_CONFIG_DEBUG 1 00:11:35.853 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:35.853 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:35.853 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:35.853 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:35.853 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:35.853 #undef SPDK_CONFIG_DPDK_UADK 00:11:35.853 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:35.853 #define SPDK_CONFIG_EXAMPLES 1 00:11:35.853 #undef SPDK_CONFIG_FC 00:11:35.853 #define SPDK_CONFIG_FC_PATH 00:11:35.853 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:35.853 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:35.853 #define SPDK_CONFIG_FSDEV 1 00:11:35.853 #undef SPDK_CONFIG_FUSE 00:11:35.853 #undef SPDK_CONFIG_FUZZER 00:11:35.853 #define SPDK_CONFIG_FUZZER_LIB 00:11:35.853 #undef SPDK_CONFIG_GOLANG 00:11:35.853 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:35.853 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:35.853 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:35.853 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:35.853 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:35.853 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:35.853 #undef SPDK_CONFIG_HAVE_LZ4 00:11:35.853 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:35.853 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:35.853 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:35.853 #define SPDK_CONFIG_IDXD 1 00:11:35.853 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:35.853 #undef SPDK_CONFIG_IPSEC_MB 00:11:35.853 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:35.853 #define SPDK_CONFIG_ISAL 1 00:11:35.853 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:35.853 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:35.854 #define SPDK_CONFIG_LIBDIR 00:11:35.854 #undef SPDK_CONFIG_LTO 00:11:35.854 #define SPDK_CONFIG_MAX_LCORES 128 00:11:35.854 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:35.854 #define SPDK_CONFIG_NVME_CUSE 1 00:11:35.854 #undef SPDK_CONFIG_OCF 00:11:35.854 #define SPDK_CONFIG_OCF_PATH 00:11:35.854 #define SPDK_CONFIG_OPENSSL_PATH 00:11:35.854 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:35.854 
#define SPDK_CONFIG_PGO_DIR 00:11:35.854 #undef SPDK_CONFIG_PGO_USE 00:11:35.854 #define SPDK_CONFIG_PREFIX /usr/local 00:11:35.854 #undef SPDK_CONFIG_RAID5F 00:11:35.854 #undef SPDK_CONFIG_RBD 00:11:35.854 #define SPDK_CONFIG_RDMA 1 00:11:35.854 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:35.854 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:35.854 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:35.854 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:35.854 #define SPDK_CONFIG_SHARED 1 00:11:35.854 #undef SPDK_CONFIG_SMA 00:11:35.854 #define SPDK_CONFIG_TESTS 1 00:11:35.854 #undef SPDK_CONFIG_TSAN 00:11:35.854 #define SPDK_CONFIG_UBLK 1 00:11:35.854 #define SPDK_CONFIG_UBSAN 1 00:11:35.854 #undef SPDK_CONFIG_UNIT_TESTS 00:11:35.854 #undef SPDK_CONFIG_URING 00:11:35.854 #define SPDK_CONFIG_URING_PATH 00:11:35.854 #undef SPDK_CONFIG_URING_ZNS 00:11:35.854 #undef SPDK_CONFIG_USDT 00:11:35.854 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:35.854 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:35.854 #undef SPDK_CONFIG_VFIO_USER 00:11:35.854 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:35.854 #define SPDK_CONFIG_VHOST 1 00:11:35.854 #define SPDK_CONFIG_VIRTIO 1 00:11:35.854 #undef SPDK_CONFIG_VTUNE 00:11:35.854 #define SPDK_CONFIG_VTUNE_DIR 00:11:35.854 #define SPDK_CONFIG_WERROR 1 00:11:35.854 #define SPDK_CONFIG_WPDK_DIR 00:11:35.854 #define SPDK_CONFIG_XNVME 1 00:11:35.854 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:35.854 23:49:08 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.854 23:49:08 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.854 23:49:08 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.854 23:49:08 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.854 23:49:08 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.854 23:49:08 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.854 23:49:08 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.854 23:49:08 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.854 23:49:08 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:35.854 23:49:08 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:35.854 23:49:08 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:35.854 23:49:08 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:35.854 23:49:08 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:35.855 
23:49:08 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:35.855 
23:49:08 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:35.855 23:49:08 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68640 ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68640 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.p7SH8Q 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.p7SH8Q/tests/xnvme /tmp/spdk.p7SH8Q 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974786048 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593198592 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:35.856 
23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260621312 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974786048 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593198592 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:35.856 23:49:08 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97303855104 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2398924800 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:35.856 * Looking for test storage... 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974786048 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:35.856 23:49:08 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.856 23:49:08 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.857 23:49:08 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:35.857 23:49:08 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.857 23:49:08 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.857 --rc genhtml_branch_coverage=1 00:11:35.857 --rc genhtml_function_coverage=1 00:11:35.857 --rc genhtml_legend=1 00:11:35.857 --rc geninfo_all_blocks=1 00:11:35.857 --rc geninfo_unexecuted_blocks=1 00:11:35.857 00:11:35.857 ' 00:11:35.857 23:49:08 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.857 --rc genhtml_branch_coverage=1 00:11:35.857 --rc genhtml_function_coverage=1 00:11:35.857 --rc genhtml_legend=1 00:11:35.857 --rc geninfo_all_blocks=1 00:11:35.857 --rc geninfo_unexecuted_blocks=1 00:11:35.857 00:11:35.857 ' 00:11:35.857 23:49:08 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.857 --rc genhtml_branch_coverage=1 00:11:35.857 --rc genhtml_function_coverage=1 00:11:35.857 --rc genhtml_legend=1 00:11:35.857 --rc geninfo_all_blocks=1 00:11:35.857 --rc geninfo_unexecuted_blocks=1 00:11:35.857 00:11:35.857 ' 00:11:35.857 23:49:08 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.857 --rc genhtml_branch_coverage=1 00:11:35.857 --rc genhtml_function_coverage=1 00:11:35.857 --rc genhtml_legend=1 00:11:35.857 --rc geninfo_all_blocks=1 00:11:35.857 --rc geninfo_unexecuted_blocks=1 00:11:35.857 00:11:35.857 ' 00:11:35.857 23:49:08 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.857 23:49:08 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.118 23:49:08 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.118 23:49:08 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.118 23:49:08 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.118 23:49:08 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.119 23:49:08 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.119 23:49:08 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.119 23:49:08 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:36.119 23:49:08 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:36.119 
23:49:08 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:36.119 23:49:08 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:36.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:36.380 Waiting for block devices as requested 00:11:36.380 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.380 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.640 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.640 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:41.976 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:41.976 23:49:14 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:41.976 23:49:14 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:41.976 23:49:14 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:42.237 23:49:14 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:42.237 23:49:14 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:42.237 No valid GPT data, bailing 00:11:42.237 23:49:14 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:42.237 23:49:14 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:11:42.237 23:49:14 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:42.237 23:49:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:42.237 23:49:14 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.237 23:49:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.237 23:49:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:42.238 ************************************ 00:11:42.238 START TEST xnvme_rpc 00:11:42.238 ************************************ 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69030 00:11:42.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69030 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69030 ']' 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.238 23:49:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.238 [2024-12-05 23:49:14.866844] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:11:42.238 [2024-12-05 23:49:14.866983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69030 ] 00:11:42.498 [2024-12-05 23:49:15.022369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.498 [2024-12-05 23:49:15.119744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.069 xnvme_bdev 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:43.069 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69030 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69030 ']' 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69030 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69030 00:11:43.328 killing process with pid 69030 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69030' 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69030 00:11:43.328 23:49:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69030 00:11:44.709 ************************************ 00:11:44.709 END TEST xnvme_rpc 00:11:44.709 ************************************ 00:11:44.709 00:11:44.709 real 0m2.586s 00:11:44.709 user 0m2.686s 00:11:44.709 sys 0m0.338s 00:11:44.709 23:49:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.709 23:49:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.709 23:49:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:44.709 23:49:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.709 23:49:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.709 23:49:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:44.709 ************************************ 00:11:44.709 START TEST xnvme_bdevperf 00:11:44.968 ************************************ 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:44.968 23:49:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 { 00:11:44.968 "subsystems": [ 00:11:44.968 { 00:11:44.968 "subsystem": "bdev", 00:11:44.968 "config": [ 00:11:44.968 { 00:11:44.968 "params": { 00:11:44.968 "io_mechanism": "libaio", 00:11:44.968 "conserve_cpu": false, 00:11:44.968 "filename": "/dev/nvme0n1", 00:11:44.968 "name": "xnvme_bdev" 00:11:44.968 }, 00:11:44.968 "method": "bdev_xnvme_create" 00:11:44.968 }, 00:11:44.968 { 00:11:44.968 "method": "bdev_wait_for_examine" 00:11:44.968 } 00:11:44.968 ] 00:11:44.968 } 00:11:44.968 ] 00:11:44.968 } 00:11:44.968 [2024-12-05 23:49:17.482802] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:11:44.968 [2024-12-05 23:49:17.483066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69104 ] 00:11:44.968 [2024-12-05 23:49:17.643165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.227 [2024-12-05 23:49:17.736631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.486 Running I/O for 5 seconds... 00:11:47.400 20787.00 IOPS, 81.20 MiB/s [2024-12-05T23:49:21.050Z] 27946.00 IOPS, 109.16 MiB/s [2024-12-05T23:49:22.435Z] 31158.67 IOPS, 121.71 MiB/s [2024-12-05T23:49:23.010Z] 34206.50 IOPS, 133.62 MiB/s 00:11:50.301 Latency(us) 00:11:50.301 [2024-12-05T23:49:23.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.301 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:50.301 xnvme_bdev : 5.00 35845.48 140.02 0.00 0.00 1781.22 171.72 13308.85 00:11:50.301 [2024-12-05T23:49:23.010Z] =================================================================================================================== 00:11:50.301 [2024-12-05T23:49:23.010Z] Total : 35845.48 140.02 0.00 0.00 1781.22 171.72 13308.85 00:11:51.245 23:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:51.245 23:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:11:51.245 23:49:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:11:51.245 23:49:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:51.245 23:49:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:51.245 { 00:11:51.245 "subsystems": [ 00:11:51.245 { 00:11:51.245 "subsystem": "bdev", 00:11:51.245 "config": [ 00:11:51.245 { 00:11:51.245 "params": { 00:11:51.245 "io_mechanism": "libaio", 00:11:51.245 "conserve_cpu": false, 00:11:51.245 "filename": "/dev/nvme0n1", 00:11:51.245 "name": "xnvme_bdev" 00:11:51.245 }, 00:11:51.245 "method": "bdev_xnvme_create" 00:11:51.245 }, 00:11:51.245 { 00:11:51.245 "method": "bdev_wait_for_examine" 00:11:51.245 } 00:11:51.245 ] 00:11:51.245 } 00:11:51.245 ] 00:11:51.245 } 00:11:51.245 [2024-12-05 23:49:23.809006] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:11:51.245 [2024-12-05 23:49:23.809148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69179 ] 00:11:51.505 [2024-12-05 23:49:23.965822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.505 [2024-12-05 23:49:24.091806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.777 Running I/O for 5 seconds... 00:11:54.110 15912.00 IOPS, 62.16 MiB/s [2024-12-05T23:49:27.761Z] 9186.50 IOPS, 35.88 MiB/s [2024-12-05T23:49:28.703Z] 6977.00 IOPS, 27.25 MiB/s [2024-12-05T23:49:29.645Z] 6953.25 IOPS, 27.16 MiB/s [2024-12-05T23:49:29.645Z] 13188.00 IOPS, 51.52 MiB/s 00:11:56.936 Latency(us) 00:11:56.936 [2024-12-05T23:49:29.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.936 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:11:56.936 xnvme_bdev : 5.00 13201.78 51.57 0.00 0.00 4843.21 64.20 40128.20 00:11:56.936 [2024-12-05T23:49:29.645Z] =================================================================================================================== 00:11:56.936 [2024-12-05T23:49:29.645Z] Total : 13201.78 51.57 0.00 0.00 4843.21 64.20 40128.20 00:11:57.897 00:11:57.897 real 0m12.822s 00:11:57.897 user 0m7.116s 00:11:57.897 sys 0m4.431s 00:11:57.897 23:49:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.897 23:49:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:57.897 ************************************ 00:11:57.897 END TEST xnvme_bdevperf 00:11:57.897 ************************************ 00:11:57.897 23:49:30 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:11:57.897 23:49:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.897 23:49:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.897 23:49:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.897 ************************************ 00:11:57.897 START TEST xnvme_fio_plugin 00:11:57.897 ************************************ 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:57.897 23:49:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:11:57.897 { 00:11:57.897 "subsystems": [ 00:11:57.897 { 00:11:57.897 "subsystem": "bdev", 00:11:57.897 "config": [ 00:11:57.897 { 00:11:57.897 "params": { 00:11:57.897 "io_mechanism": "libaio", 00:11:57.897 "conserve_cpu": false, 00:11:57.897 "filename": "/dev/nvme0n1", 00:11:57.897 "name": "xnvme_bdev" 00:11:57.897 }, 00:11:57.897 "method": "bdev_xnvme_create" 00:11:57.897 }, 00:11:57.897 { 00:11:57.897 "method": "bdev_wait_for_examine" 00:11:57.897 } 00:11:57.897 ] 00:11:57.897 } 00:11:57.897 ] 00:11:57.897 } 00:11:57.897 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:11:57.897 fio-3.35 00:11:57.897 Starting 1 thread 00:12:04.483 00:12:04.483 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69294: Thu Dec 5 23:49:36 2024 00:12:04.483 read: IOPS=44.9k, BW=175MiB/s (184MB/s)(877MiB/5002msec) 00:12:04.483 slat (usec): min=4, max=1490, avg=17.61, stdev=50.98 00:12:04.483 clat (usec): min=29, max=13304, avg=1003.78, stdev=581.59 00:12:04.483 lat (usec): min=78, max=13329, avg=1021.39, stdev=582.01 00:12:04.483 clat percentiles (usec): 00:12:04.483 | 1.00th=[ 196], 5.00th=[ 306], 10.00th=[ 408], 20.00th=[ 562], 00:12:04.483 | 30.00th=[ 676], 40.00th=[ 791], 50.00th=[ 898], 60.00th=[ 1020], 00:12:04.483 | 70.00th=[ 1156], 80.00th=[ 1352], 90.00th=[ 1680], 95.00th=[ 2040], 00:12:04.483 | 99.00th=[ 2966], 99.50th=[ 3359], 99.90th=[ 4883], 99.95th=[ 6128], 00:12:04.483 | 99.99th=[ 8848] 00:12:04.483 bw ( KiB/s): min=136888, max=217760, 
per=100.00%, avg=181161.78, stdev=24619.81, samples=9 00:12:04.483 iops : min=34222, max=54440, avg=45290.44, stdev=6154.95, samples=9 00:12:04.483 lat (usec) : 50=0.01%, 100=0.04%, 250=2.71%, 500=12.84%, 750=20.88% 00:12:04.483 lat (usec) : 1000=21.78% 00:12:04.483 lat (msec) : 2=36.28%, 4=5.28%, 10=0.18%, 20=0.01% 00:12:04.483 cpu : usr=34.85%, sys=51.03%, ctx=30, majf=0, minf=764 00:12:04.483 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=5.7%, 16=21.8%, 32=68.3%, >=64=2.5% 00:12:04.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.483 complete : 0=0.0%, 4=97.7%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.7%, >=64=0.0% 00:12:04.483 issued rwts: total=224441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.483 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:04.483 00:12:04.483 Run status group 0 (all jobs): 00:12:04.483 READ: bw=175MiB/s (184MB/s), 175MiB/s-175MiB/s (184MB/s-184MB/s), io=877MiB (919MB), run=5002-5002msec 00:12:04.483 ----------------------------------------------------- 00:12:04.483 Suppressions used: 00:12:04.483 count bytes template 00:12:04.483 1 11 /usr/src/fio/parse.c 00:12:04.483 1 8 libtcmalloc_minimal.so 00:12:04.483 1 904 libcrypto.so 00:12:04.483 ----------------------------------------------------- 00:12:04.483 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:04.744 23:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:04.744 { 00:12:04.744 "subsystems": [ 00:12:04.744 { 00:12:04.744 "subsystem": "bdev", 00:12:04.744 "config": [ 00:12:04.744 { 00:12:04.744 "params": { 00:12:04.744 "io_mechanism": "libaio", 00:12:04.744 "conserve_cpu": false, 00:12:04.744 "filename": "/dev/nvme0n1", 00:12:04.744 "name": "xnvme_bdev" 00:12:04.744 }, 00:12:04.744 "method": "bdev_xnvme_create" 00:12:04.744 }, 00:12:04.744 { 00:12:04.744 "method": "bdev_wait_for_examine" 00:12:04.744 } 00:12:04.744 ] 00:12:04.744 } 00:12:04.744 ] 00:12:04.744 } 00:12:04.744 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:04.744 fio-3.35 00:12:04.744 Starting 1 thread 00:12:11.337 00:12:11.337 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69391: Thu Dec 5 23:49:43 2024 00:12:11.337 write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(260MiB/5003msec); 0 zone resets 00:12:11.337 slat (usec): min=3, max=696, avg=14.14, stdev=27.43 00:12:11.337 clat (usec): min=6, max=21556, avg=4737.20, stdev=5096.23 00:12:11.337 lat (usec): min=40, max=21561, avg=4751.34, stdev=5094.69 00:12:11.337 clat percentiles (usec): 00:12:11.337 | 1.00th=[ 47], 5.00th=[ 67], 10.00th=[ 97], 20.00th=[ 182], 00:12:11.337 | 30.00th=[ 289], 40.00th=[ 486], 50.00th=[ 840], 60.00th=[ 6652], 00:12:11.337 | 70.00th=[ 8029], 80.00th=[ 9634], 90.00th=[12125], 95.00th=[13960], 00:12:11.337 | 99.00th=[16909], 99.50th=[17957], 99.90th=[19792], 99.95th=[20055], 00:12:11.337 | 99.99th=[21103] 00:12:11.337 bw ( KiB/s): min=40200, max=60536, per=95.21%, avg=50752.00, stdev=6898.52, samples=9 00:12:11.337 iops : min=10050, max=15134, avg=12688.00, stdev=1724.63, samples=9 00:12:11.337 lat (usec) : 10=0.08%, 20=0.15%, 50=1.26%, 100=8.97%, 250=16.41% 00:12:11.337 lat (usec) : 500=13.53%, 750=8.41%, 1000=2.01% 00:12:11.337 lat (msec) : 2=1.19%, 4=0.62%, 10=29.32%, 20=17.98%, 50=0.07% 00:12:11.337 cpu : usr=81.11%, sys=10.20%, ctx=12, majf=0, minf=765 00:12:11.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=72.0%, >=64=27.9% 00:12:11.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.337 complete : 0=0.0%, 4=94.7%, 8=3.3%, 16=1.6%, 32=0.4%, 64=0.1%, >=64=0.0% 00:12:11.337 issued rwts: total=0,66670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.337 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:11.337 00:12:11.337 Run status group 0 (all jobs): 00:12:11.337 WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=260MiB (273MB), run=5003-5003msec 00:12:11.337 ----------------------------------------------------- 00:12:11.337 Suppressions used: 00:12:11.337 count bytes template 00:12:11.337 1 11 /usr/src/fio/parse.c 00:12:11.337 1 8 libtcmalloc_minimal.so 00:12:11.337 1 904 libcrypto.so 00:12:11.337 
----------------------------------------------------- 00:12:11.337 00:12:11.337 ************************************ 00:12:11.337 END TEST xnvme_fio_plugin 00:12:11.337 ************************************ 00:12:11.337 00:12:11.337 real 0m13.658s 00:12:11.337 user 0m8.531s 00:12:11.337 sys 0m3.585s 00:12:11.337 23:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.337 23:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:11.337 23:49:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:11.337 23:49:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:11.337 23:49:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:11.337 23:49:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:11.337 23:49:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.337 23:49:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.337 23:49:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.337 ************************************ 00:12:11.337 START TEST xnvme_rpc 00:12:11.337 ************************************ 00:12:11.337 23:49:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:11.337 23:49:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:11.337 23:49:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:11.337 23:49:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:11.337 23:49:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69472 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69472 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69472 ']' 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.337 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.604 [2024-12-05 23:49:44.109065] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:12:11.604 [2024-12-05 23:49:44.109382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69472 ] 00:12:11.604 [2024-12-05 23:49:44.277650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.864 [2024-12-05 23:49:44.377248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.435 xnvme_bdev 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.435 23:49:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69472 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69472 ']' 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69472 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:12.435 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.436 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69472 00:12:12.696 killing process with pid 69472 00:12:12.696 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.696 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.696 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69472' 00:12:12.696 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69472 00:12:12.696 23:49:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69472 00:12:14.077 ************************************ 00:12:14.077 END TEST xnvme_rpc 00:12:14.077 ************************************ 00:12:14.077 00:12:14.077 real 0m2.659s 00:12:14.077 user 0m2.792s 00:12:14.077 sys 0m0.375s 00:12:14.077 23:49:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.077 23:49:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.077 23:49:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:14.077 23:49:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.077 23:49:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.077 23:49:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.077 ************************************ 00:12:14.077 START TEST xnvme_bdevperf 00:12:14.077 ************************************ 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:14.077 23:49:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.077 { 00:12:14.077 "subsystems": [ 00:12:14.077 { 00:12:14.077 "subsystem": "bdev", 00:12:14.077 "config": [ 00:12:14.077 { 00:12:14.077 "params": { 00:12:14.077 "io_mechanism": "libaio", 00:12:14.077 "conserve_cpu": true, 00:12:14.077 "filename": "/dev/nvme0n1", 00:12:14.077 "name": "xnvme_bdev" 00:12:14.077 }, 00:12:14.077 "method": "bdev_xnvme_create" 00:12:14.077 }, 00:12:14.077 { 00:12:14.077 "method": "bdev_wait_for_examine" 00:12:14.077 } 00:12:14.077 ] 00:12:14.077 } 00:12:14.077 ] 00:12:14.077 } 00:12:14.077 [2024-12-05 23:49:46.759830] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:12:14.077 [2024-12-05 23:49:46.759941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69540 ] 00:12:14.339 [2024-12-05 23:49:46.919583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.339 [2024-12-05 23:49:47.016573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.599 Running I/O for 5 seconds... 00:12:16.934 45622.00 IOPS, 178.21 MiB/s [2024-12-05T23:49:50.590Z] 44970.50 IOPS, 175.67 MiB/s [2024-12-05T23:49:51.535Z] 45001.00 IOPS, 175.79 MiB/s [2024-12-05T23:49:52.479Z] 44968.00 IOPS, 175.66 MiB/s 00:12:19.770 Latency(us) 00:12:19.770 [2024-12-05T23:49:52.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.770 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:19.770 xnvme_bdev : 5.00 44655.38 174.44 0.00 0.00 1429.07 47.46 11796.48 00:12:19.770 [2024-12-05T23:49:52.479Z] =================================================================================================================== 00:12:19.770 [2024-12-05T23:49:52.479Z] Total : 44655.38 174.44 0.00 0.00 1429.07 47.46 11796.48 00:12:20.341 23:49:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:20.341 23:49:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:20.341 23:49:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:20.341 23:49:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:20.341 23:49:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:20.602 { 00:12:20.602 "subsystems": [ 00:12:20.602 { 00:12:20.602 "subsystem": "bdev", 00:12:20.602 "config": [ 00:12:20.602 { 00:12:20.602 "params": { 00:12:20.602 "io_mechanism": "libaio", 00:12:20.602 "conserve_cpu": true, 00:12:20.602 "filename": "/dev/nvme0n1", 00:12:20.602 "name": "xnvme_bdev" 00:12:20.602 }, 00:12:20.602 "method": "bdev_xnvme_create" 00:12:20.602 }, 00:12:20.602 { 00:12:20.602 "method": "bdev_wait_for_examine" 00:12:20.602 } 00:12:20.602 ] 00:12:20.602 } 00:12:20.602 ] 00:12:20.602 } 00:12:20.602 [2024-12-05 23:49:53.071722] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:12:20.602 [2024-12-05 23:49:53.072004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69615 ] 00:12:20.602 [2024-12-05 23:49:53.231104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.863 [2024-12-05 23:49:53.331127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.123 Running I/O for 5 seconds... 00:12:23.007 39276.00 IOPS, 153.42 MiB/s [2024-12-05T23:49:56.697Z] 38814.50 IOPS, 151.62 MiB/s [2024-12-05T23:49:57.658Z] 38794.33 IOPS, 151.54 MiB/s [2024-12-05T23:49:58.602Z] 38807.25 IOPS, 151.59 MiB/s [2024-12-05T23:49:58.602Z] 39149.00 IOPS, 152.93 MiB/s 00:12:25.893 Latency(us) 00:12:25.893 [2024-12-05T23:49:58.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.893 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:25.893 xnvme_bdev : 5.00 39129.48 152.85 0.00 0.00 1631.00 378.09 5016.02 00:12:25.893 [2024-12-05T23:49:58.602Z] =================================================================================================================== 00:12:25.893 [2024-12-05T23:49:58.602Z] Total : 39129.48 152.85 0.00 0.00 1631.00 378.09 5016.02 00:12:26.838 00:12:26.838 real 0m12.673s 00:12:26.838 user 0m4.729s 00:12:26.838 sys 0m5.711s 00:12:26.838 23:49:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.838 23:49:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.838 ************************************ 00:12:26.838 END TEST xnvme_bdevperf 00:12:26.838 ************************************ 00:12:26.838 23:49:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:26.838 23:49:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.838 23:49:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.838 23:49:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.838 ************************************ 00:12:26.838 START TEST xnvme_fio_plugin 00:12:26.838 ************************************ 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:26.838 
23:49:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:26.838 23:49:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:26.838 { 00:12:26.838 "subsystems": [ 00:12:26.838 { 00:12:26.838 "subsystem": "bdev", 00:12:26.838 "config": [ 00:12:26.838 { 00:12:26.838 "params": { 00:12:26.838 "io_mechanism": "libaio", 00:12:26.838 "conserve_cpu": true, 00:12:26.838 "filename": "/dev/nvme0n1", 00:12:26.838 "name": "xnvme_bdev" 00:12:26.838 }, 00:12:26.838 "method": "bdev_xnvme_create" 00:12:26.838 }, 00:12:26.838 { 00:12:26.838 "method": "bdev_wait_for_examine" 00:12:26.838 } 00:12:26.838 ] 00:12:26.838 } 00:12:26.838 ] 00:12:26.838 } 00:12:27.100 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:27.100 fio-3.35 00:12:27.100 Starting 1 thread 00:12:33.692 00:12:33.692 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69735: Thu Dec 5 23:50:05 2024 00:12:33.692 read: IOPS=42.9k, BW=167MiB/s (176MB/s)(837MiB/5001msec) 00:12:33.693 slat (usec): min=4, max=1723, avg=18.41, stdev=65.28 00:12:33.693 clat (usec): min=104, max=4752, avg=986.81, stdev=503.20 00:12:33.693 lat (usec): min=156, max=4828, avg=1005.22, stdev=501.38 00:12:33.693 clat percentiles (usec): 00:12:33.693 | 1.00th=[ 200], 5.00th=[ 306], 10.00th=[ 412], 20.00th=[ 570], 00:12:33.693 | 30.00th=[ 693], 40.00th=[ 799], 50.00th=[ 914], 60.00th=[ 1037], 00:12:33.693 | 70.00th=[ 1172], 80.00th=[ 1352], 90.00th=[ 1631], 95.00th=[ 1893], 00:12:33.693 | 99.00th=[ 2606], 99.50th=[ 2933], 99.90th=[ 3523], 99.95th=[ 3752], 00:12:33.693 | 99.99th=[ 4178] 00:12:33.693 bw ( KiB/s): min=148880, 
max=203832, per=100.00%, avg=171447.11, stdev=16634.70, samples=9 00:12:33.693 iops : min=37220, max=50958, avg=42861.78, stdev=4158.67, samples=9 00:12:33.693 lat (usec) : 250=2.61%, 500=12.57%, 750=19.99%, 1000=22.21% 00:12:33.693 lat (msec) : 2=38.91%, 4=3.69%, 10=0.02% 00:12:33.693 cpu : usr=36.82%, sys=53.12%, ctx=13, majf=0, minf=764 00:12:33.693 IO depths : 1=0.3%, 2=1.0%, 4=3.3%, 8=9.4%, 16=24.5%, 32=59.6%, >=64=2.0% 00:12:33.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.693 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:33.693 issued rwts: total=214309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.693 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.693 00:12:33.693 Run status group 0 (all jobs): 00:12:33.693 READ: bw=167MiB/s (176MB/s), 167MiB/s-167MiB/s (176MB/s-176MB/s), io=837MiB (878MB), run=5001-5001msec 00:12:33.693 ----------------------------------------------------- 00:12:33.693 Suppressions used: 00:12:33.693 count bytes template 00:12:33.693 1 11 /usr/src/fio/parse.c 00:12:33.693 1 8 libtcmalloc_minimal.so 00:12:33.693 1 904 libcrypto.so 00:12:33.693 ----------------------------------------------------- 00:12:33.693 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:33.693 23:50:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:33.693 23:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.693 { 00:12:33.693 "subsystems": [ 00:12:33.693 { 00:12:33.693 "subsystem": "bdev", 00:12:33.693 "config": [ 00:12:33.693 { 00:12:33.693 "params": { 00:12:33.693 "io_mechanism": "libaio", 00:12:33.693 "conserve_cpu": true, 00:12:33.693 "filename": "/dev/nvme0n1", 00:12:33.693 "name": "xnvme_bdev" 00:12:33.693 }, 00:12:33.693 "method": "bdev_xnvme_create" 00:12:33.693 }, 00:12:33.693 { 00:12:33.693 "method": "bdev_wait_for_examine" 00:12:33.693 } 00:12:33.693 ] 00:12:33.693 } 00:12:33.693 ] 00:12:33.693 } 00:12:33.955 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:33.955 fio-3.35 00:12:33.955 Starting 1 thread 00:12:40.607 00:12:40.607 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69831: Thu Dec 5 23:50:12 2024 00:12:40.607 write: IOPS=42.8k, BW=167MiB/s (175MB/s)(836MiB/5001msec); 0 zone resets 00:12:40.607 slat (usec): min=3, max=1481, avg=19.50, stdev=53.11 00:12:40.607 clat (usec): min=82, max=4430, avg=944.29, stdev=534.33 00:12:40.607 lat (usec): min=153, max=4479, avg=963.79, stdev=535.59 00:12:40.607 clat percentiles (usec): 00:12:40.607 | 1.00th=[ 196], 5.00th=[ 289], 10.00th=[ 379], 20.00th=[ 529], 00:12:40.607 | 30.00th=[ 652], 40.00th=[ 758], 50.00th=[ 857], 60.00th=[ 955], 00:12:40.607 | 70.00th=[ 1074], 80.00th=[ 1237], 90.00th=[ 1582], 95.00th=[ 2008], 00:12:40.607 | 99.00th=[ 2900], 99.50th=[ 3130], 99.90th=[ 3654], 99.95th=[ 3818], 00:12:40.608 | 99.99th=[ 4047] 00:12:40.608 bw ( KiB/s): min=147792, max=201384, per=98.46%, avg=168449.78, stdev=17412.75, samples=9 00:12:40.608 iops : min=36948, max=50346, avg=42112.44, stdev=4353.19, samples=9 00:12:40.608 lat (usec) : 100=0.01%, 250=3.07%, 500=14.70%, 750=21.66%, 1000=24.53% 00:12:40.608 lat (msec) : 2=30.99%, 4=5.05%, 10=0.01% 00:12:40.608 cpu : usr=32.26%, sys=53.52%, ctx=120, majf=0, minf=765 00:12:40.608 IO depths : 1=0.2%, 2=1.0%, 4=3.8%, 8=10.6%, 16=25.1%, 32=57.4%, >=64=1.9% 00:12:40.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.608 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:12:40.608 issued rwts: total=0,213894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.608 00:12:40.608 Run status group 0 (all jobs): 00:12:40.608 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=836MiB (876MB), run=5001-5001msec 00:12:40.608 ----------------------------------------------------- 00:12:40.608 Suppressions used: 00:12:40.608 count bytes template 00:12:40.608 1 11 /usr/src/fio/parse.c 00:12:40.608 1 8 libtcmalloc_minimal.so 00:12:40.608 1 904 libcrypto.so 00:12:40.608 ----------------------------------------------------- 00:12:40.608 00:12:40.608 ************************************ 00:12:40.608 END TEST xnvme_fio_plugin 00:12:40.608 
************************************ 00:12:40.608 00:12:40.608 real 0m13.808s 00:12:40.608 user 0m6.279s 00:12:40.608 sys 0m5.919s 00:12:40.608 23:50:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.608 23:50:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:40.608 23:50:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:40.608 23:50:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.608 23:50:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.608 23:50:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.608 ************************************ 00:12:40.608 START TEST xnvme_rpc 00:12:40.608 ************************************ 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69912 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69912 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69912 ']' 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.608 23:50:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:40.868 [2024-12-05 23:50:13.365894] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:12:40.869 [2024-12-05 23:50:13.366026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69912 ] 00:12:40.869 [2024-12-05 23:50:13.524189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.130 [2024-12-05 23:50:13.641370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.703 xnvme_bdev 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:41.703 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:41.704 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69912 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69912 ']' 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69912 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69912 00:12:41.965 killing process with pid 69912 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69912' 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69912 00:12:41.965 23:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69912 00:12:43.358 00:12:43.358 real 0m2.771s 00:12:43.358 user 0m2.819s 00:12:43.358 sys 0m0.407s 00:12:43.358 23:50:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.358 23:50:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.358 ************************************ 00:12:43.358 END TEST xnvme_rpc 00:12:43.358 ************************************ 00:12:43.619 23:50:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:43.619 23:50:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.619 23:50:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.619 23:50:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.619 ************************************ 00:12:43.619 START TEST xnvme_bdevperf 00:12:43.619 ************************************ 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:43.619 23:50:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.619 { 00:12:43.619 "subsystems": [ 00:12:43.619 { 00:12:43.619 "subsystem": "bdev", 00:12:43.619 "config": [ 00:12:43.619 { 00:12:43.619 "params": { 00:12:43.619 "io_mechanism": "io_uring", 00:12:43.619 "conserve_cpu": false, 00:12:43.619 "filename": "/dev/nvme0n1", 00:12:43.619 "name": "xnvme_bdev" 00:12:43.619 }, 00:12:43.619 "method": "bdev_xnvme_create" 00:12:43.619 }, 00:12:43.619 { 00:12:43.619 "method": "bdev_wait_for_examine" 00:12:43.619 } 00:12:43.619 ] 00:12:43.619 } 00:12:43.619 ] 00:12:43.619 } 00:12:43.619 [2024-12-05 23:50:16.175843] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:12:43.619 [2024-12-05 23:50:16.176162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69986 ] 00:12:43.879 [2024-12-05 23:50:16.335997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.879 [2024-12-05 23:50:16.452536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.141 Running I/O for 5 seconds... 00:12:46.027 46688.00 IOPS, 182.38 MiB/s [2024-12-05T23:50:20.126Z] 42859.00 IOPS, 167.42 MiB/s [2024-12-05T23:50:21.071Z] 40060.33 IOPS, 156.49 MiB/s [2024-12-05T23:50:22.014Z] 37988.00 IOPS, 148.39 MiB/s 00:12:49.305 Latency(us) 00:12:49.305 [2024-12-05T23:50:22.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.305 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:49.305 xnvme_bdev : 5.00 37199.65 145.31 0.00 0.00 1716.01 170.93 77836.60 00:12:49.305 [2024-12-05T23:50:22.014Z] =================================================================================================================== 00:12:49.305 [2024-12-05T23:50:22.014Z] Total : 37199.65 145.31 0.00 0.00 1716.01 170.93 77836.60 00:12:49.876 23:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:49.876 23:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:49.876 23:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:49.876 23:50:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:49.876 23:50:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.876 { 00:12:49.876 "subsystems": [ 00:12:49.876 { 00:12:49.876 "subsystem": "bdev", 00:12:49.876 "config": [ 00:12:49.876 { 00:12:49.876 "params": { 00:12:49.876 "io_mechanism": "io_uring", 00:12:49.876 "conserve_cpu": false, 00:12:49.876 "filename": "/dev/nvme0n1", 00:12:49.876 "name": "xnvme_bdev" 00:12:49.876 }, 00:12:49.876 "method": "bdev_xnvme_create" 00:12:49.876 }, 00:12:49.876 { 00:12:49.876 "method": "bdev_wait_for_examine" 00:12:49.876 } 00:12:49.876 ] 00:12:49.876 } 00:12:49.876 ] 00:12:49.876 } 00:12:50.138 [2024-12-05 23:50:22.584859] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:12:50.138 [2024-12-05 23:50:22.585225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70061 ] 00:12:50.138 [2024-12-05 23:50:22.748956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.400 [2024-12-05 23:50:22.874285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.684 Running I/O for 5 seconds... 00:12:52.565 5703.00 IOPS, 22.28 MiB/s [2024-12-05T23:50:26.213Z] 5957.00 IOPS, 23.27 MiB/s [2024-12-05T23:50:27.597Z] 5991.33 IOPS, 23.40 MiB/s [2024-12-05T23:50:28.169Z] 6016.25 IOPS, 23.50 MiB/s [2024-12-05T23:50:28.431Z] 6206.60 IOPS, 24.24 MiB/s 00:12:55.722 Latency(us) 00:12:55.722 [2024-12-05T23:50:28.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.722 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:55.722 xnvme_bdev : 5.01 6205.77 24.24 0.00 0.00 10297.05 56.71 30852.33 00:12:55.722 [2024-12-05T23:50:28.431Z] =================================================================================================================== 00:12:55.722 [2024-12-05T23:50:28.431Z] Total : 6205.77 24.24 0.00 0.00 10297.05 56.71 30852.33 00:12:56.289 00:12:56.289 real 0m12.854s 00:12:56.289 user 0m5.812s 00:12:56.289 sys 0m6.789s 00:12:56.289 ************************************ 00:12:56.289 END TEST xnvme_bdevperf 00:12:56.289 ************************************ 00:12:56.289 23:50:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.289 23:50:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.548 23:50:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:56.548 23:50:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.548 23:50:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.548 23:50:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.548 ************************************ 00:12:56.548 START TEST xnvme_fio_plugin 00:12:56.548 ************************************ 00:12:56.548 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:56.548 23:50:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:56.548 23:50:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:12:56.548 23:50:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:56.548 23:50:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:56.549 23:50:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:56.549 23:50:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:56.549 { 00:12:56.549 "subsystems": [ 00:12:56.549 { 00:12:56.549 "subsystem": "bdev", 00:12:56.549 "config": [ 00:12:56.549 { 00:12:56.549 "params": { 00:12:56.549 "io_mechanism": "io_uring", 00:12:56.549 "conserve_cpu": false, 00:12:56.549 "filename": "/dev/nvme0n1", 00:12:56.549 "name": "xnvme_bdev" 00:12:56.549 }, 00:12:56.549 "method": "bdev_xnvme_create" 00:12:56.549 }, 00:12:56.549 { 00:12:56.549 "method": "bdev_wait_for_examine" 00:12:56.549 } 00:12:56.549 ] 00:12:56.549 } 00:12:56.549 ] 00:12:56.549 } 00:12:56.549 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:56.549 fio-3.35 00:12:56.549 Starting 1 thread 00:13:03.129 00:13:03.129 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70175: Thu Dec 5 23:50:34 2024 00:13:03.129 read: IOPS=34.8k, BW=136MiB/s (143MB/s)(680MiB/5001msec) 00:13:03.129 slat (usec): min=2, max=465, avg= 4.10, stdev= 2.64 00:13:03.129 clat (usec): min=824, max=3410, avg=1670.36, stdev=295.13 00:13:03.129 lat (usec): min=827, max=3442, avg=1674.46, stdev=295.72 00:13:03.129 clat percentiles (usec): 00:13:03.129 | 1.00th=[ 1074], 5.00th=[ 1237], 10.00th=[ 1336], 20.00th=[ 1434], 00:13:03.129 | 30.00th=[ 1516], 40.00th=[ 1565], 50.00th=[ 1631], 60.00th=[ 1696], 00:13:03.129 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2073], 95.00th=[ 2212], 00:13:03.129 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2900], 99.95th=[ 3032], 00:13:03.129 | 99.99th=[ 3261] 00:13:03.129 bw ( KiB/s): min=128000, 
max=165376, per=100.00%, avg=139605.33, stdev=11502.92, samples=9 00:13:03.129 iops : min=32000, max=41344, avg=34901.33, stdev=2875.73, samples=9 00:13:03.129 lat (usec) : 1000=0.36% 00:13:03.129 lat (msec) : 2=86.20%, 4=13.44% 00:13:03.129 cpu : usr=31.76%, sys=66.70%, ctx=21, majf=0, minf=762 00:13:03.129 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:03.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.129 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:03.129 issued rwts: total=174121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.129 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:03.129 00:13:03.129 Run status group 0 (all jobs): 00:13:03.129 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=680MiB (713MB), run=5001-5001msec 00:13:03.390 ----------------------------------------------------- 00:13:03.390 Suppressions used: 00:13:03.390 count bytes template 00:13:03.390 1 11 /usr/src/fio/parse.c 00:13:03.390 1 8 libtcmalloc_minimal.so 00:13:03.391 1 904 libcrypto.so 00:13:03.391 ----------------------------------------------------- 00:13:03.391 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.391 23:50:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.391 { 00:13:03.391 "subsystems": [ 00:13:03.391 { 00:13:03.391 "subsystem": "bdev", 00:13:03.391 "config": [ 00:13:03.391 { 00:13:03.391 "params": { 00:13:03.391 "io_mechanism": "io_uring", 00:13:03.391 "conserve_cpu": false, 00:13:03.391 "filename": "/dev/nvme0n1", 00:13:03.391 "name": "xnvme_bdev" 00:13:03.391 }, 00:13:03.391 "method": "bdev_xnvme_create" 00:13:03.391 }, 00:13:03.391 { 00:13:03.391 "method": "bdev_wait_for_examine" 00:13:03.391 } 00:13:03.391 ] 00:13:03.391 } 00:13:03.391 ] 00:13:03.391 } 00:13:03.391 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:03.391 fio-3.35 00:13:03.391 Starting 1 thread 00:13:09.978 00:13:09.978 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70272: Thu Dec 5 23:50:41 2024 00:13:09.978 write: IOPS=34.7k, BW=136MiB/s (142MB/s)(679MiB/5001msec); 0 zone resets 00:13:09.978 slat (nsec): min=2901, max=81690, avg=4801.20, stdev=2891.13 00:13:09.978 clat (usec): min=499, max=6969, avg=1647.46, stdev=329.27 00:13:09.978 lat (usec): min=502, max=6973, avg=1652.26, stdev=330.11 00:13:09.978 clat percentiles (usec): 00:13:09.978 | 1.00th=[ 1037], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1369], 00:13:09.978 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1696], 00:13:09.978 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2073], 95.00th=[ 2245], 00:13:09.978 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3064], 99.95th=[ 3359], 00:13:09.978 | 99.99th=[ 4948] 00:13:09.978 bw ( KiB/s): min=126688, max=155320, per=99.12%, avg=137739.56, stdev=9532.73, samples=9 00:13:09.978 iops : min=31672, max=38830, avg=34434.89, stdev=2383.18, samples=9 00:13:09.978 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.59% 00:13:09.978 lat (msec) : 2=85.84%, 4=13.51%, 10=0.03% 00:13:09.978 cpu : usr=33.90%, sys=64.58%, ctx=10, majf=0, minf=763 00:13:09.978 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:09.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.978 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:09.978 issued rwts: total=0,173739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.978 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:09.978 00:13:09.978 Run status group 0 (all jobs): 00:13:09.978 WRITE: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5001-5001msec 00:13:10.239 ----------------------------------------------------- 00:13:10.239 Suppressions used: 00:13:10.239 count bytes template 00:13:10.239 1 11 /usr/src/fio/parse.c 00:13:10.239 1 8 libtcmalloc_minimal.so 00:13:10.239 1 904 libcrypto.so 00:13:10.239 ----------------------------------------------------- 00:13:10.239 00:13:10.239 00:13:10.239 real 0m13.754s 00:13:10.239 user 0m6.187s 00:13:10.239 sys 0m7.084s 00:13:10.239 23:50:42 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.239 23:50:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.239 ************************************ 00:13:10.240 END TEST xnvme_fio_plugin 00:13:10.240 ************************************ 00:13:10.240 23:50:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:10.240 23:50:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:10.240 23:50:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:10.240 23:50:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:10.240 23:50:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.240 23:50:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.240 23:50:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.240 ************************************ 00:13:10.240 START TEST xnvme_rpc 00:13:10.240 ************************************ 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70353 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70353 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70353 ']' 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.240 23:50:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:10.240 [2024-12-05 23:50:42.941532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:13:10.240 [2024-12-05 23:50:42.942419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70353 ] 00:13:10.500 [2024-12-05 23:50:43.099338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.758 [2024-12-05 23:50:43.235857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.328 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.328 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.329 xnvme_bdev 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.329 23:50:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:11.329 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.329 23:50:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:11.329 23:50:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:11.329 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.329 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70353 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70353 ']' 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70353 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70353 00:13:11.597 killing process with pid 70353 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70353' 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70353 00:13:11.597 23:50:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70353 00:13:12.973 00:13:12.973 real 0m2.762s 00:13:12.973 user 0m2.827s 00:13:12.973 sys 0m0.421s 00:13:12.973 23:50:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.973 ************************************ 00:13:12.973 END TEST xnvme_rpc 00:13:12.973 ************************************ 00:13:12.973 23:50:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.973 23:50:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.973 23:50:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.973 23:50:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.973 23:50:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.973 ************************************ 00:13:12.973 START TEST xnvme_bdevperf 00:13:12.973 ************************************ 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:12.973 23:50:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.231 { 00:13:13.231 "subsystems": [ 00:13:13.231 { 00:13:13.231 "subsystem": "bdev", 00:13:13.231 "config": [ 00:13:13.231 { 00:13:13.231 "params": { 00:13:13.231 "io_mechanism": "io_uring", 00:13:13.231 "conserve_cpu": true, 00:13:13.231 "filename": "/dev/nvme0n1", 00:13:13.231 "name": "xnvme_bdev" 00:13:13.231 }, 00:13:13.231 "method": "bdev_xnvme_create" 00:13:13.231 }, 00:13:13.231 { 00:13:13.231 "method": "bdev_wait_for_examine" 00:13:13.231 } 00:13:13.231 ] 00:13:13.231 } 00:13:13.231 ] 00:13:13.231 } 00:13:13.231 [2024-12-05 23:50:45.738730] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:13:13.231 [2024-12-05 23:50:45.738840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70427 ] 00:13:13.231 [2024-12-05 23:50:45.899257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.489 [2024-12-05 23:50:45.993425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.749 Running I/O for 5 seconds... 00:13:15.625 38228.00 IOPS, 149.33 MiB/s [2024-12-05T23:50:49.279Z] 36388.00 IOPS, 142.14 MiB/s [2024-12-05T23:50:50.663Z] 35461.00 IOPS, 138.52 MiB/s [2024-12-05T23:50:51.603Z] 35253.50 IOPS, 137.71 MiB/s [2024-12-05T23:50:51.603Z] 36288.20 IOPS, 141.75 MiB/s 00:13:18.894 Latency(us) 00:13:18.894 [2024-12-05T23:50:51.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.894 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.894 xnvme_bdev : 5.00 36267.10 141.67 0.00 0.00 1760.47 245.76 131475.30 00:13:18.894 [2024-12-05T23:50:51.603Z] =================================================================================================================== 00:13:18.894 [2024-12-05T23:50:51.603Z] Total : 36267.10 141.67 0.00 0.00 1760.47 245.76 131475.30 00:13:19.464 23:50:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:19.464 23:50:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:19.464 23:50:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:19.464 23:50:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:19.464 23:50:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:19.464 { 00:13:19.464 "subsystems": [ 00:13:19.464 { 00:13:19.464 "subsystem": "bdev", 00:13:19.464 "config": [ 00:13:19.464 { 00:13:19.464 "params": { 00:13:19.464 "io_mechanism": "io_uring", 00:13:19.464 "conserve_cpu": true, 00:13:19.464 "filename": "/dev/nvme0n1", 00:13:19.464 "name": "xnvme_bdev" 00:13:19.464 }, 00:13:19.464 "method": "bdev_xnvme_create" 00:13:19.464 }, 00:13:19.464 { 00:13:19.464 "method": "bdev_wait_for_examine" 00:13:19.464 } 00:13:19.464 ] 00:13:19.464 } 00:13:19.464 ] 00:13:19.464 } 00:13:19.464 [2024-12-05 23:50:52.014078] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:13:19.464 [2024-12-05 23:50:52.014181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70502 ] 00:13:19.725 [2024-12-05 23:50:52.174617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.725 [2024-12-05 23:50:52.270635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.987 Running I/O for 5 seconds... 00:13:21.872 9522.00 IOPS, 37.20 MiB/s [2024-12-05T23:50:55.966Z] 14656.00 IOPS, 57.25 MiB/s [2024-12-05T23:50:56.907Z] 15650.67 IOPS, 61.14 MiB/s [2024-12-05T23:50:57.853Z] 16420.00 IOPS, 64.14 MiB/s [2024-12-05T23:50:57.853Z] 17084.80 IOPS, 66.74 MiB/s 00:13:25.144 Latency(us) 00:13:25.144 [2024-12-05T23:50:57.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.144 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:25.144 xnvme_bdev : 5.01 17075.25 66.70 0.00 0.00 3741.48 71.68 25508.63 00:13:25.144 [2024-12-05T23:50:57.853Z] =================================================================================================================== 00:13:25.144 [2024-12-05T23:50:57.853Z] Total : 17075.25 66.70 0.00 0.00 3741.48 71.68 25508.63 00:13:25.720 00:13:25.720 real 0m12.691s 00:13:25.720 user 0m8.923s 00:13:25.720 sys 0m2.733s 00:13:25.720 ************************************ 00:13:25.720 END TEST xnvme_bdevperf 00:13:25.720 ************************************ 00:13:25.720 23:50:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.720 23:50:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:25.720 23:50:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:25.720 23:50:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.720 23:50:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.720 23:50:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.986 ************************************ 00:13:25.986 START TEST xnvme_fio_plugin 00:13:25.986 ************************************ 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:25.986 23:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:25.986 { 00:13:25.986 "subsystems": [ 00:13:25.986 { 00:13:25.986 "subsystem": "bdev", 00:13:25.986 "config": [ 00:13:25.986 { 00:13:25.986 "params": { 00:13:25.986 "io_mechanism": "io_uring", 00:13:25.986 "conserve_cpu": true, 00:13:25.986 "filename": "/dev/nvme0n1", 00:13:25.986 "name": "xnvme_bdev" 00:13:25.986 }, 00:13:25.986 "method": "bdev_xnvme_create" 00:13:25.986 }, 00:13:25.986 { 00:13:25.986 "method": "bdev_wait_for_examine" 00:13:25.986 } 00:13:25.986 ] 00:13:25.986 } 00:13:25.986 ] 00:13:25.986 } 00:13:25.986 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:25.986 fio-3.35 00:13:25.986 Starting 1 thread 00:13:32.573 00:13:32.573 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70616: Thu Dec 5 23:51:04 2024 00:13:32.573 read: IOPS=54.3k, BW=212MiB/s (222MB/s)(1061MiB/5002msec) 00:13:32.573 slat (usec): min=2, max=104, avg= 3.73, stdev= 1.36 00:13:32.573 clat (usec): min=75, max=93275, avg=1036.04, stdev=352.20 00:13:32.573 lat (usec): min=78, max=93279, avg=1039.77, stdev=352.34 00:13:32.573 clat percentiles (usec): 00:13:32.573 | 1.00th=[ 701], 5.00th=[ 775], 10.00th=[ 824], 20.00th=[ 873], 00:13:32.573 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1037], 00:13:32.573 | 70.00th=[ 1090], 80.00th=[ 1156], 90.00th=[ 1287], 95.00th=[ 1401], 00:13:32.573 | 99.00th=[ 1795], 99.50th=[ 2073], 99.90th=[ 4424], 99.95th=[ 6194], 00:13:32.573 | 99.99th=[10421] 00:13:32.573 bw ( KiB/s): min=201216, max=229376, 
per=100.00%, avg=217694.22, stdev=9670.64, samples=9 00:13:32.573 iops : min=50304, max=57344, avg=54423.56, stdev=2417.66, samples=9 00:13:32.573 lat (usec) : 100=0.01%, 250=0.02%, 500=0.13%, 750=3.04%, 1000=49.08% 00:13:32.573 lat (msec) : 2=47.15%, 4=0.44%, 10=0.12%, 20=0.01%, 100=0.01% 00:13:32.573 cpu : usr=38.01%, sys=58.39%, ctx=9, majf=0, minf=762 00:13:32.573 IO depths : 1=1.3%, 2=2.8%, 4=6.0%, 8=12.5%, 16=25.1%, 32=50.8%, >=64=1.6% 00:13:32.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.573 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:13:32.573 issued rwts: total=271525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.573 00:13:32.573 Run status group 0 (all jobs): 00:13:32.573 READ: bw=212MiB/s (222MB/s), 212MiB/s-212MiB/s (222MB/s-222MB/s), io=1061MiB (1112MB), run=5002-5002msec 00:13:32.573 ----------------------------------------------------- 00:13:32.573 Suppressions used: 00:13:32.573 count bytes template 00:13:32.573 1 11 /usr/src/fio/parse.c 00:13:32.573 1 8 libtcmalloc_minimal.so 00:13:32.573 1 904 libcrypto.so 00:13:32.573 ----------------------------------------------------- 00:13:32.573 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:32.573 23:51:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.573 { 00:13:32.573 "subsystems": [ 00:13:32.573 { 00:13:32.573 "subsystem": "bdev", 00:13:32.573 "config": [ 00:13:32.573 { 00:13:32.573 "params": { 00:13:32.573 "io_mechanism": "io_uring", 00:13:32.573 "conserve_cpu": true, 00:13:32.573 "filename": "/dev/nvme0n1", 00:13:32.573 "name": "xnvme_bdev" 00:13:32.573 }, 00:13:32.573 "method": "bdev_xnvme_create" 00:13:32.573 }, 00:13:32.573 { 00:13:32.573 "method": "bdev_wait_for_examine" 00:13:32.573 } 00:13:32.573 ] 00:13:32.573 } 00:13:32.573 ] 00:13:32.573 } 00:13:32.834 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:32.834 fio-3.35 00:13:32.834 Starting 1 thread 00:13:39.431 00:13:39.431 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70708: Thu Dec 5 23:51:11 2024 00:13:39.431 write: IOPS=46.6k, BW=182MiB/s (191MB/s)(910MiB/5001msec); 0 zone resets 00:13:39.431 slat (usec): min=2, max=607, avg= 3.67, stdev= 1.66 00:13:39.431 clat (usec): min=91, max=277009, avg=1235.75, stdev=5919.31 00:13:39.431 lat (usec): min=95, max=277012, avg=1239.42, stdev=5919.31 00:13:39.431 clat percentiles (usec): 00:13:39.431 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 873], 00:13:39.431 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 996], 60.00th=[ 1045], 00:13:39.431 | 70.00th=[ 1090], 80.00th=[ 1172], 90.00th=[ 1270], 95.00th=[ 1352], 00:13:39.431 | 99.00th=[ 1614], 99.50th=[ 1762], 99.90th=[ 91751], 99.95th=[156238], 00:13:39.431 | 99.99th=[274727] 00:13:39.431 bw ( KiB/s): min=56456, max=228840, per=100.00%, avg=196419.56, stdev=54110.75, samples=9 00:13:39.431 iops : min=14114, max=57210, avg=49104.89, stdev=13527.69, samples=9 00:13:39.431 lat (usec) : 100=0.01%, 250=0.02%, 500=0.09%, 750=2.96%, 1000=47.63% 00:13:39.431 lat (msec) : 2=49.03%, 4=0.08%, 20=0.03%, 50=0.01%, 100=0.08% 00:13:39.431 lat (msec) : 250=0.05%, 500=0.03% 00:13:39.431 cpu : usr=55.06%, sys=42.42%, ctx=13, majf=0, minf=763 00:13:39.431 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:39.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.431 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:39.431 issued rwts: total=0,232840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.431 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.431 00:13:39.431 Run status group 0 (all jobs): 00:13:39.431 WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=910MiB (954MB), run=5001-5001msec 00:13:39.431 ----------------------------------------------------- 00:13:39.431 Suppressions used: 00:13:39.431 count bytes template 00:13:39.431 1 11 /usr/src/fio/parse.c 00:13:39.431 1 8 libtcmalloc_minimal.so 00:13:39.431 1 904 libcrypto.so 00:13:39.431 ----------------------------------------------------- 00:13:39.431 00:13:39.431 
00:13:39.431 real 0m13.517s 00:13:39.431 user 0m7.340s 00:13:39.431 sys 0m5.543s 00:13:39.431 23:51:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.431 ************************************ 00:13:39.431 END TEST xnvme_fio_plugin 00:13:39.431 ************************************ 00:13:39.431 23:51:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:39.431 23:51:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:39.431 23:51:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:39.431 23:51:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.431 23:51:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.431 ************************************ 00:13:39.431 START TEST xnvme_rpc 00:13:39.431 ************************************ 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70794 00:13:39.431 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70794 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70794 ']' 00:13:39.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.432 23:51:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.432 [2024-12-05 23:51:12.066905] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:13:39.432 [2024-12-05 23:51:12.067050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:13:39.693 [2024-12-05 23:51:12.224937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.693 [2024-12-05 23:51:12.319113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.264 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.264 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:40.264 23:51:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:40.264 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.264 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.525 xnvme_bdev 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:40.525 23:51:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70794 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70794 ']' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70794 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70794 00:13:40.525 killing process with pid 70794 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70794' 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70794 00:13:40.525 23:51:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70794 00:13:42.439 ************************************ 00:13:42.439 END TEST xnvme_rpc 00:13:42.439 ************************************ 00:13:42.439 00:13:42.439 real 0m2.815s 00:13:42.439 user 0m2.876s 00:13:42.439 sys 0m0.411s 00:13:42.439 23:51:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.439 23:51:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.439 23:51:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:42.439 23:51:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.439 23:51:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.439 23:51:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.439 ************************************ 00:13:42.439 START TEST xnvme_bdevperf 00:13:42.439 ************************************ 00:13:42.439 23:51:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:42.439 23:51:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:42.439 23:51:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:42.439 23:51:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.439 23:51:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:42.440 23:51:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:42.440 23:51:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:42.440 23:51:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:42.440 { 00:13:42.440 "subsystems": [ 00:13:42.440 { 00:13:42.440 "subsystem": "bdev", 00:13:42.440 "config": [ 00:13:42.440 { 00:13:42.440 "params": { 00:13:42.440 "io_mechanism": "io_uring_cmd", 00:13:42.440 "conserve_cpu": false, 00:13:42.440 "filename": "/dev/ng0n1", 00:13:42.440 "name": "xnvme_bdev" 00:13:42.440 }, 00:13:42.440 "method": "bdev_xnvme_create" 00:13:42.440 }, 00:13:42.440 { 00:13:42.440 "method": "bdev_wait_for_examine" 00:13:42.440 } 00:13:42.440 ] 00:13:42.440 } 00:13:42.440 ] 00:13:42.440 } 00:13:42.440 [2024-12-05 23:51:14.944884] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:13:42.440 [2024-12-05 23:51:14.945464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70862 ] 00:13:42.440 [2024-12-05 23:51:15.111347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.700 [2024-12-05 23:51:15.238216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.959 Running I/O for 5 seconds... 00:13:44.856 33901.00 IOPS, 132.43 MiB/s [2024-12-05T23:51:18.953Z] 36230.00 IOPS, 141.52 MiB/s [2024-12-05T23:51:19.896Z] 40217.00 IOPS, 157.10 MiB/s [2024-12-05T23:51:20.839Z] 40018.50 IOPS, 156.32 MiB/s [2024-12-05T23:51:20.839Z] 39043.20 IOPS, 152.51 MiB/s 00:13:48.130 Latency(us) 00:13:48.130 [2024-12-05T23:51:20.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.130 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:48.130 xnvme_bdev : 5.01 39012.61 152.39 0.00 0.00 1636.06 233.16 11040.30 00:13:48.130 [2024-12-05T23:51:20.839Z] =================================================================================================================== 00:13:48.130 [2024-12-05T23:51:20.839Z] Total : 39012.61 152.39 0.00 0.00 1636.06 233.16 11040.30 00:13:48.702 23:51:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:48.702 23:51:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:48.702 23:51:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:48.702 23:51:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:48.702 23:51:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:48.702 { 00:13:48.702 "subsystems": [ 00:13:48.702 { 00:13:48.702 "subsystem": "bdev", 00:13:48.702 "config": [ 00:13:48.702 { 00:13:48.702 "params": { 00:13:48.702 "io_mechanism": "io_uring_cmd", 00:13:48.702 "conserve_cpu": false, 00:13:48.702 "filename": "/dev/ng0n1", 00:13:48.702 "name": "xnvme_bdev" 00:13:48.702 }, 00:13:48.702 "method": "bdev_xnvme_create" 00:13:48.702 }, 00:13:48.702 { 00:13:48.702 "method": "bdev_wait_for_examine" 00:13:48.702 } 00:13:48.702 ] 00:13:48.702 } 00:13:48.702 ] 00:13:48.702 } 00:13:48.702 [2024-12-05 23:51:21.408211] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:13:48.702 [2024-12-05 23:51:21.408369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70941 ] 00:13:48.964 [2024-12-05 23:51:21.575154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.226 [2024-12-05 23:51:21.703784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.488 Running I/O for 5 seconds... 00:13:51.392 33594.00 IOPS, 131.23 MiB/s [2024-12-05T23:51:25.074Z] 27273.50 IOPS, 106.54 MiB/s [2024-12-05T23:51:26.015Z] 25337.67 IOPS, 98.98 MiB/s [2024-12-05T23:51:27.405Z] 24707.50 IOPS, 96.51 MiB/s [2024-12-05T23:51:27.405Z] 24229.40 IOPS, 94.65 MiB/s 00:13:54.696 Latency(us) 00:13:54.696 [2024-12-05T23:51:27.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.696 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:54.696 xnvme_bdev : 5.01 24196.79 94.52 0.00 0.00 2639.38 85.07 23794.61 00:13:54.696 [2024-12-05T23:51:27.405Z] =================================================================================================================== 00:13:54.696 [2024-12-05T23:51:27.405Z] Total : 24196.79 94.52 0.00 0.00 2639.38 85.07 23794.61 00:13:55.270 23:51:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:55.270 23:51:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:13:55.270 23:51:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:55.270 23:51:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:55.270 23:51:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 { 00:13:55.270 "subsystems": [ 00:13:55.270 { 00:13:55.270 "subsystem": "bdev", 00:13:55.270 "config": [ 00:13:55.270 { 00:13:55.270 "params": { 00:13:55.270 "io_mechanism": "io_uring_cmd", 00:13:55.270 "conserve_cpu": false, 00:13:55.270 "filename": "/dev/ng0n1", 00:13:55.270 "name": "xnvme_bdev" 00:13:55.270 }, 00:13:55.270 "method": "bdev_xnvme_create" 00:13:55.270 }, 00:13:55.270 { 00:13:55.270 "method": "bdev_wait_for_examine" 00:13:55.270 } 00:13:55.270 ] 00:13:55.270 } 00:13:55.270 ] 00:13:55.270 } 00:13:55.270 [2024-12-05 23:51:27.876928] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:13:55.270 [2024-12-05 23:51:27.877081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71014 ] 00:13:55.530 [2024-12-05 23:51:28.041466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.530 [2024-12-05 23:51:28.168148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.792 Running I/O for 5 seconds... 
00:13:58.123 76800.00 IOPS, 300.00 MiB/s [2024-12-05T23:51:31.764Z] 77152.00 IOPS, 301.38 MiB/s [2024-12-05T23:51:32.700Z] 77845.33 IOPS, 304.08 MiB/s [2024-12-05T23:51:33.637Z] 78688.00 IOPS, 307.38 MiB/s 00:14:00.928 Latency(us) 00:14:00.928 [2024-12-05T23:51:33.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.928 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:00.928 xnvme_bdev : 5.00 77989.41 304.65 0.00 0.00 817.26 526.18 2923.91 00:14:00.928 [2024-12-05T23:51:33.637Z] =================================================================================================================== 00:14:00.928 [2024-12-05T23:51:33.637Z] Total : 77989.41 304.65 0.00 0.00 817.26 526.18 2923.91 00:14:01.872 23:51:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.872 23:51:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:01.872 23:51:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.872 23:51:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.872 23:51:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.872 { 00:14:01.872 "subsystems": [ 00:14:01.872 { 00:14:01.872 "subsystem": "bdev", 00:14:01.872 "config": [ 00:14:01.872 { 00:14:01.872 "params": { 00:14:01.872 "io_mechanism": "io_uring_cmd", 00:14:01.872 "conserve_cpu": false, 00:14:01.872 "filename": "/dev/ng0n1", 00:14:01.872 "name": "xnvme_bdev" 00:14:01.872 }, 00:14:01.872 "method": "bdev_xnvme_create" 00:14:01.872 }, 00:14:01.872 { 00:14:01.872 "method": "bdev_wait_for_examine" 00:14:01.872 } 00:14:01.872 ] 00:14:01.872 } 00:14:01.872 ] 00:14:01.872 } 00:14:01.872 [2024-12-05 23:51:34.321616] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:14:01.872 [2024-12-05 23:51:34.321759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71088 ] 00:14:01.872 [2024-12-05 23:51:34.484851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.132 [2024-12-05 23:51:34.615031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.393 Running I/O for 5 seconds... 
00:14:04.277 143.00 IOPS, 0.56 MiB/s [2024-12-05T23:51:37.930Z] 169.50 IOPS, 0.66 MiB/s [2024-12-05T23:51:38.919Z] 156.33 IOPS, 0.61 MiB/s [2024-12-05T23:51:40.305Z] 150.50 IOPS, 0.59 MiB/s [2024-12-05T23:51:40.305Z] 162.60 IOPS, 0.64 MiB/s 00:14:07.596 Latency(us) 00:14:07.596 [2024-12-05T23:51:40.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.596 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:07.596 xnvme_bdev : 5.24 167.34 0.65 0.00 0.00 374091.79 139.42 1213121.77 00:14:07.596 [2024-12-05T23:51:40.305Z] =================================================================================================================== 00:14:07.596 [2024-12-05T23:51:40.305Z] Total : 167.34 0.65 0.00 0.00 374091.79 139.42 1213121.77 00:14:08.169 00:14:08.169 real 0m25.844s 00:14:08.169 user 0m14.496s 00:14:08.169 sys 0m10.858s 00:14:08.169 23:51:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.169 ************************************ 00:14:08.169 END TEST xnvme_bdevperf 00:14:08.169 ************************************ 00:14:08.169 23:51:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:08.169 23:51:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:08.169 23:51:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.169 23:51:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.169 23:51:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.169 ************************************ 00:14:08.169 START TEST xnvme_fio_plugin 00:14:08.169 ************************************ 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:08.169 
23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:08.169 23:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:08.169 { 00:14:08.169 "subsystems": [ 00:14:08.169 { 00:14:08.169 "subsystem": "bdev", 00:14:08.169 "config": [ 00:14:08.169 { 00:14:08.169 "params": { 00:14:08.169 "io_mechanism": "io_uring_cmd", 00:14:08.169 "conserve_cpu": false, 00:14:08.169 "filename": "/dev/ng0n1", 00:14:08.169 "name": "xnvme_bdev" 00:14:08.169 }, 00:14:08.169 "method": "bdev_xnvme_create" 00:14:08.169 }, 00:14:08.169 { 00:14:08.169 "method": "bdev_wait_for_examine" 00:14:08.169 } 00:14:08.169 ] 00:14:08.169 } 00:14:08.169 ] 00:14:08.169 } 00:14:08.431 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:08.431 fio-3.35 00:14:08.431 Starting 1 thread 00:14:15.017 00:14:15.017 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71206: Thu Dec 5 23:51:46 2024 00:14:15.017 read: IOPS=42.6k, BW=166MiB/s (174MB/s)(831MiB/5001msec) 00:14:15.017 slat (usec): min=2, max=309, avg= 3.61, stdev= 1.91 00:14:15.017 clat (usec): min=171, max=3707, avg=1360.04, stdev=293.93 00:14:15.017 lat (usec): min=174, max=3710, avg=1363.65, stdev=294.21 00:14:15.017 clat percentiles (usec): 00:14:15.017 | 1.00th=[ 824], 5.00th=[ 938], 10.00th=[ 1012], 20.00th=[ 1123], 00:14:15.017 | 30.00th=[ 1188], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1401], 00:14:15.017 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1745], 95.00th=[ 1893], 00:14:15.017 | 99.00th=[ 2212], 99.50th=[ 2343], 99.90th=[ 2671], 99.95th=[ 2835], 00:14:15.017 | 99.99th=[ 3654] 00:14:15.017 bw ( KiB/s): min=147712, max=178688, per=97.99%, avg=166826.67, stdev=9387.73, samples=9 00:14:15.017 iops : min=36928, max=44672, avg=41706.67, stdev=2346.93, samples=9 00:14:15.017 lat (usec) : 250=0.01%, 750=0.22%, 1000=8.78% 00:14:15.017 lat (msec) : 2=87.98%, 4=3.02% 00:14:15.017 cpu : usr=37.42%, sys=61.08%, ctx=80, majf=0, minf=762 00:14:15.017 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:15.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.017 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:14:15.017 issued rwts: total=212844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.017 00:14:15.017 Run status group 0 (all jobs): 00:14:15.017 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=831MiB (872MB), run=5001-5001msec 00:14:15.017 ----------------------------------------------------- 00:14:15.017 Suppressions used: 00:14:15.017 count bytes template 00:14:15.017 1 11 /usr/src/fio/parse.c 00:14:15.017 1 8 libtcmalloc_minimal.so 00:14:15.017 1 904 libcrypto.so 00:14:15.017 ----------------------------------------------------- 00:14:15.017 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.017 23:51:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.017 { 00:14:15.017 "subsystems": [ 00:14:15.017 { 00:14:15.017 "subsystem": "bdev", 00:14:15.017 "config": [ 00:14:15.017 { 00:14:15.017 "params": { 00:14:15.017 "io_mechanism": "io_uring_cmd", 00:14:15.017 "conserve_cpu": false, 00:14:15.017 "filename": "/dev/ng0n1", 00:14:15.017 "name": "xnvme_bdev" 00:14:15.017 }, 00:14:15.017 "method": "bdev_xnvme_create" 00:14:15.017 }, 00:14:15.017 { 00:14:15.017 "method": "bdev_wait_for_examine" 00:14:15.017 } 00:14:15.017 ] 00:14:15.017 } 00:14:15.017 ] 00:14:15.017 } 00:14:15.017 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:15.017 fio-3.35 00:14:15.017 Starting 1 thread 00:14:21.634 00:14:21.634 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71297: Thu Dec 5 23:51:53 2024 00:14:21.634 write: IOPS=35.7k, BW=140MiB/s (146MB/s)(698MiB/5001msec); 0 zone resets 00:14:21.634 slat (nsec): min=2911, max=89599, avg=4105.06, stdev=2070.21 00:14:21.634 clat (usec): min=69, max=15314, avg=1632.71, stdev=642.83 00:14:21.634 lat (usec): min=73, max=15317, avg=1636.81, stdev=642.98 00:14:21.634 clat percentiles (usec): 00:14:21.634 | 1.00th=[ 758], 5.00th=[ 1057], 10.00th=[ 1188], 20.00th=[ 1319], 00:14:21.634 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1647], 00:14:21.634 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 2008], 95.00th=[ 2212], 00:14:21.634 | 99.00th=[ 4359], 99.50th=[ 6194], 99.90th=[ 8979], 99.95th=[ 9765], 00:14:21.634 | 99.99th=[13566] 00:14:21.634 bw ( KiB/s): min=127752, max=161208, per=98.00%, avg=140084.44, stdev=9246.94, samples=9 00:14:21.634 iops : min=31938, max=40302, avg=35021.11, stdev=2311.73, samples=9 00:14:21.634 lat (usec) : 100=0.01%, 250=0.04%, 500=0.28%, 750=0.67%, 1000=2.33% 00:14:21.634 lat (msec) : 2=86.26%, 4=9.31%, 10=1.07%, 20=0.04% 00:14:21.634 cpu : usr=36.64%, sys=61.98%, ctx=14, majf=0, minf=763 00:14:21.634 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.6%, 16=24.2%, 32=52.4%, >=64=1.9% 00:14:21.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.634 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:21.634 issued rwts: total=0,178719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:21.634 00:14:21.634 Run status group 0 (all jobs): 00:14:21.634 WRITE: bw=140MiB/s (146MB/s), 140MiB/s-140MiB/s (146MB/s-146MB/s), io=698MiB (732MB), run=5001-5001msec 00:14:21.895 ----------------------------------------------------- 00:14:21.895 Suppressions used: 00:14:21.895 count bytes template 00:14:21.895 1 11 /usr/src/fio/parse.c 00:14:21.895 1 8 libtcmalloc_minimal.so 00:14:21.895 1 904 libcrypto.so 00:14:21.895 ----------------------------------------------------- 00:14:21.895 00:14:21.895 00:14:21.895 real 0m13.627s 00:14:21.895 user 0m6.419s 00:14:21.895 sys 0m6.744s 00:14:21.895 23:51:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.895 23:51:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:21.895 ************************************ 00:14:21.895 END TEST xnvme_fio_plugin 00:14:21.895 ************************************ 00:14:21.895 23:51:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:21.895 23:51:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:21.895 23:51:54 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:21.895 23:51:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:21.895 23:51:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.895 23:51:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.895 23:51:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.895 ************************************ 00:14:21.895 START TEST xnvme_rpc 00:14:21.895 ************************************ 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71381 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71381 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71381 ']' 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.895 23:51:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.895 [2024-12-05 23:51:54.566494] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
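For orientation, the xnvme_rpc case starting here exercises the RPC surface rather than an I/O workload: it creates the xnvme bdev over RPC, reads the parameters back with framework_get_config, and deletes it again. Reconstructed from the trace that follows (device and bdev names as logged; shown as direct rpc.py calls instead of the suite's rpc_cmd wrapper):

# -c maps to conserve_cpu=true for this pass.
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/ng0n1
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev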
00:14:21.895 [2024-12-05 23:51:54.566635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:14:22.155 [2024-12-05 23:51:54.731998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.155 [2024-12-05 23:51:54.857749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.095 xnvme_bdev 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71381 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71381 ']' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71381 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71381 00:14:23.095 killing process with pid 71381 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71381' 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71381 00:14:23.095 23:51:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71381 00:14:25.042 00:14:25.043 real 0m2.920s 00:14:25.043 user 0m2.913s 00:14:25.043 sys 0m0.486s 00:14:25.043 23:51:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.043 ************************************ 00:14:25.043 END TEST xnvme_rpc 00:14:25.043 ************************************ 00:14:25.043 23:51:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.043 23:51:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:25.043 23:51:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.043 23:51:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.043 23:51:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.043 ************************************ 00:14:25.043 START TEST xnvme_bdevperf 00:14:25.043 ************************************ 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:25.043 23:51:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.043 { 00:14:25.043 "subsystems": [ 00:14:25.043 { 00:14:25.043 "subsystem": "bdev", 00:14:25.043 "config": [ 00:14:25.043 { 00:14:25.043 "params": { 00:14:25.043 "io_mechanism": "io_uring_cmd", 00:14:25.043 "conserve_cpu": true, 00:14:25.043 "filename": "/dev/ng0n1", 00:14:25.043 "name": "xnvme_bdev" 00:14:25.043 }, 00:14:25.043 "method": "bdev_xnvme_create" 00:14:25.043 }, 00:14:25.043 { 00:14:25.043 "method": "bdev_wait_for_examine" 00:14:25.043 } 00:14:25.043 ] 00:14:25.043 } 00:14:25.043 ] 00:14:25.043 } 00:14:25.043 [2024-12-05 23:51:57.542881] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:14:25.043 [2024-12-05 23:51:57.543231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71451 ] 00:14:25.043 [2024-12-05 23:51:57.707524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.303 [2024-12-05 23:51:57.837120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.564 Running I/O for 5 seconds... 00:14:27.449 36630.00 IOPS, 143.09 MiB/s [2024-12-05T23:52:01.548Z] 36053.50 IOPS, 140.83 MiB/s [2024-12-05T23:52:02.493Z] 35421.00 IOPS, 138.36 MiB/s [2024-12-05T23:52:03.434Z] 35140.75 IOPS, 137.27 MiB/s [2024-12-05T23:52:03.434Z] 35300.00 IOPS, 137.89 MiB/s 00:14:30.725 Latency(us) 00:14:30.725 [2024-12-05T23:52:03.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.725 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:30.725 xnvme_bdev : 5.01 35263.30 137.75 0.00 0.00 1810.12 740.43 16736.89 00:14:30.725 [2024-12-05T23:52:03.434Z] =================================================================================================================== 00:14:30.725 [2024-12-05T23:52:03.434Z] Total : 35263.30 137.75 0.00 0.00 1810.12 740.43 16736.89 00:14:31.298 23:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.298 23:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:31.298 23:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:31.298 23:52:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:31.298 23:52:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.298 { 00:14:31.298 "subsystems": [ 00:14:31.298 { 00:14:31.298 "subsystem": "bdev", 00:14:31.298 "config": [ 00:14:31.298 { 00:14:31.298 "params": { 00:14:31.298 "io_mechanism": "io_uring_cmd", 00:14:31.298 "conserve_cpu": true, 00:14:31.298 "filename": "/dev/ng0n1", 00:14:31.298 "name": "xnvme_bdev" 00:14:31.298 }, 00:14:31.298 "method": "bdev_xnvme_create" 00:14:31.298 }, 00:14:31.298 { 00:14:31.298 "method": "bdev_wait_for_examine" 00:14:31.298 } 00:14:31.298 ] 00:14:31.298 } 00:14:31.298 ] 00:14:31.298 } 00:14:31.560 [2024-12-05 23:52:04.019423] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
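The --json /dev/fd/62 argument seen in every bdevperf and fio invocation is plain bash process substitution: gen_conf prints the JSON shown above and the shell exposes it as a file descriptor. A hedged equivalent, assuming gen_conf is available in the calling shell; note that this second pass repeats the same workloads with "conserve_cpu": true in the generated config:

./build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096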
00:14:31.560 [2024-12-05 23:52:04.019570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71525 ] 00:14:31.561 [2024-12-05 23:52:04.183504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.822 [2024-12-05 23:52:04.311502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.085 Running I/O for 5 seconds... 00:14:33.967 21617.00 IOPS, 84.44 MiB/s [2024-12-05T23:52:07.618Z] 21020.00 IOPS, 82.11 MiB/s [2024-12-05T23:52:09.020Z] 19613.00 IOPS, 76.61 MiB/s [2024-12-05T23:52:09.965Z] 20368.50 IOPS, 79.56 MiB/s [2024-12-05T23:52:09.965Z] 20707.40 IOPS, 80.89 MiB/s 00:14:37.256 Latency(us) 00:14:37.256 [2024-12-05T23:52:09.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.256 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:37.256 xnvme_bdev : 5.01 20693.96 80.84 0.00 0.00 3086.41 69.71 216167.98 00:14:37.256 [2024-12-05T23:52:09.965Z] =================================================================================================================== 00:14:37.256 [2024-12-05T23:52:09.965Z] Total : 20693.96 80.84 0.00 0.00 3086.41 69.71 216167.98 00:14:37.878 23:52:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.878 23:52:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:37.878 23:52:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:37.878 23:52:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:37.878 23:52:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:37.878 { 00:14:37.878 "subsystems": [ 00:14:37.878 { 00:14:37.878 "subsystem": "bdev", 00:14:37.878 "config": [ 00:14:37.878 { 00:14:37.878 "params": { 00:14:37.878 "io_mechanism": "io_uring_cmd", 00:14:37.878 "conserve_cpu": true, 00:14:37.878 "filename": "/dev/ng0n1", 00:14:37.878 "name": "xnvme_bdev" 00:14:37.878 }, 00:14:37.878 "method": "bdev_xnvme_create" 00:14:37.878 }, 00:14:37.878 { 00:14:37.878 "method": "bdev_wait_for_examine" 00:14:37.878 } 00:14:37.878 ] 00:14:37.878 } 00:14:37.878 ] 00:14:37.878 } 00:14:37.878 [2024-12-05 23:52:10.479850] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:14:37.878 [2024-12-05 23:52:10.480303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71605 ] 00:14:38.140 [2024-12-05 23:52:10.645542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.140 [2024-12-05 23:52:10.776778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.401 Running I/O for 5 seconds... 
00:14:40.721 80832.00 IOPS, 315.75 MiB/s [2024-12-05T23:52:14.373Z] 80384.00 IOPS, 314.00 MiB/s [2024-12-05T23:52:15.315Z] 79104.00 IOPS, 309.00 MiB/s [2024-12-05T23:52:16.258Z] 78352.00 IOPS, 306.06 MiB/s 00:14:43.549 Latency(us) 00:14:43.549 [2024-12-05T23:52:16.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.549 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:43.549 xnvme_bdev : 5.00 78193.11 305.44 0.00 0.00 815.03 450.56 2898.71 00:14:43.549 [2024-12-05T23:52:16.258Z] =================================================================================================================== 00:14:43.549 [2024-12-05T23:52:16.258Z] Total : 78193.11 305.44 0.00 0.00 815.03 450.56 2898.71 00:14:44.493 23:52:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.493 23:52:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:44.493 23:52:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:44.493 23:52:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:44.493 23:52:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.493 { 00:14:44.493 "subsystems": [ 00:14:44.493 { 00:14:44.493 "subsystem": "bdev", 00:14:44.493 "config": [ 00:14:44.493 { 00:14:44.493 "params": { 00:14:44.493 "io_mechanism": "io_uring_cmd", 00:14:44.493 "conserve_cpu": true, 00:14:44.493 "filename": "/dev/ng0n1", 00:14:44.493 "name": "xnvme_bdev" 00:14:44.493 }, 00:14:44.493 "method": "bdev_xnvme_create" 00:14:44.493 }, 00:14:44.493 { 00:14:44.493 "method": "bdev_wait_for_examine" 00:14:44.493 } 00:14:44.493 ] 00:14:44.493 } 00:14:44.493 ] 00:14:44.493 } 00:14:44.493 [2024-12-05 23:52:16.894647] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:14:44.493 [2024-12-05 23:52:16.894876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71680 ] 00:14:44.493 [2024-12-05 23:52:17.055671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.493 [2024-12-05 23:52:17.165712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.754 Running I/O for 5 seconds... 
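Each of these bdevperf passes is driven by the same loop in xnvme.sh, traced above as 'for io_pattern in "${io_pattern_ref[@]}"'. Paraphrased, under the assumption that io_pattern_ref holds the four workloads seen in this log:

for io_pattern in randread randwrite unmap write_zeroes; do
    ./build/examples/bdevperf --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done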
00:14:47.064 46440.00 IOPS, 181.41 MiB/s [2024-12-05T23:52:20.704Z] 54557.00 IOPS, 213.11 MiB/s [2024-12-05T23:52:21.680Z] 54935.00 IOPS, 214.59 MiB/s [2024-12-05T23:52:22.613Z] 55112.25 IOPS, 215.28 MiB/s [2024-12-05T23:52:22.613Z] 56026.00 IOPS, 218.85 MiB/s 00:14:49.904 Latency(us) 00:14:49.904 [2024-12-05T23:52:22.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.904 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:49.904 xnvme_bdev : 5.00 55991.97 218.72 0.00 0.00 1138.25 204.80 16636.06 00:14:49.904 [2024-12-05T23:52:22.613Z] =================================================================================================================== 00:14:49.904 [2024-12-05T23:52:22.613Z] Total : 55991.97 218.72 0.00 0.00 1138.25 204.80 16636.06 00:14:50.469 00:14:50.469 real 0m25.695s 00:14:50.469 user 0m17.369s 00:14:50.469 sys 0m5.871s 00:14:50.469 23:52:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.469 23:52:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.469 ************************************ 00:14:50.469 END TEST xnvme_bdevperf 00:14:50.469 ************************************ 00:14:50.773 23:52:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:50.773 23:52:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:50.773 23:52:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.773 23:52:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.773 ************************************ 00:14:50.773 START TEST xnvme_fio_plugin 00:14:50.773 ************************************ 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@32 -- # gen_conf 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:50.773 23:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:50.773 { 00:14:50.773 "subsystems": [ 00:14:50.773 { 00:14:50.773 "subsystem": "bdev", 00:14:50.773 "config": [ 00:14:50.773 { 00:14:50.773 "params": { 00:14:50.773 "io_mechanism": "io_uring_cmd", 00:14:50.773 "conserve_cpu": true, 00:14:50.773 "filename": "/dev/ng0n1", 00:14:50.773 "name": "xnvme_bdev" 00:14:50.773 }, 00:14:50.773 "method": "bdev_xnvme_create" 00:14:50.773 }, 00:14:50.773 { 00:14:50.773 "method": "bdev_wait_for_examine" 00:14:50.773 } 00:14:50.773 ] 00:14:50.773 } 00:14:50.773 ] 00:14:50.773 } 00:14:50.773 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:50.773 fio-3.35 00:14:50.773 Starting 1 thread 00:14:57.355 00:14:57.355 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71793: Thu Dec 5 23:52:29 2024 00:14:57.355 read: IOPS=39.6k, BW=155MiB/s (162MB/s)(775MiB/5001msec) 00:14:57.355 slat (usec): min=2, max=129, avg= 3.99, stdev= 2.13 00:14:57.355 clat (usec): min=582, max=2908, avg=1452.70, stdev=321.23 00:14:57.355 lat (usec): min=585, max=2957, avg=1456.70, stdev=321.72 00:14:57.355 clat percentiles (usec): 00:14:57.355 | 1.00th=[ 742], 5.00th=[ 873], 10.00th=[ 988], 20.00th=[ 1188], 00:14:57.355 | 30.00th=[ 1319], 40.00th=[ 1401], 50.00th=[ 1483], 60.00th=[ 1532], 00:14:57.355 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1827], 95.00th=[ 1958], 00:14:57.355 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2671], 99.95th=[ 2737], 00:14:57.355 | 99.99th=[ 2802] 00:14:57.355 bw ( KiB/s): min=140288, max=219136, per=100.00%, avg=160199.11, stdev=25726.97, samples=9 00:14:57.355 iops : min=35072, max=54784, avg=40049.78, stdev=6431.74, samples=9 00:14:57.355 lat (usec) : 750=1.17%, 1000=9.28% 00:14:57.355 lat (msec) : 2=85.45%, 4=4.10% 00:14:57.355 cpu : usr=43.14%, sys=53.46%, ctx=17, majf=0, minf=762 00:14:57.355 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:57.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.355 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:57.355 issued rwts: total=198272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:57.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:57.355 00:14:57.355 Run status group 0 (all jobs): 00:14:57.355 READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=775MiB (812MB), run=5001-5001msec 00:14:57.355 ----------------------------------------------------- 00:14:57.355 Suppressions used: 00:14:57.355 count bytes template 00:14:57.355 1 11 /usr/src/fio/parse.c 00:14:57.355 1 8 libtcmalloc_minimal.so 00:14:57.355 1 904 libcrypto.so 00:14:57.355 ----------------------------------------------------- 00:14:57.355 00:14:57.355 23:52:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:57.617 23:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.617 { 00:14:57.617 "subsystems": [ 00:14:57.617 { 00:14:57.617 "subsystem": "bdev", 00:14:57.618 "config": [ 00:14:57.618 { 00:14:57.618 "params": { 00:14:57.618 "io_mechanism": "io_uring_cmd", 00:14:57.618 "conserve_cpu": true, 00:14:57.618 "filename": "/dev/ng0n1", 00:14:57.618 "name": "xnvme_bdev" 00:14:57.618 }, 00:14:57.618 "method": "bdev_xnvme_create" 00:14:57.618 }, 00:14:57.618 { 00:14:57.618 "method": "bdev_wait_for_examine" 00:14:57.618 } 00:14:57.618 ] 00:14:57.618 } 00:14:57.618 ] 00:14:57.618 } 00:14:57.618 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:57.618 fio-3.35 00:14:57.618 Starting 1 thread 00:15:04.266 00:15:04.266 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71884: Thu Dec 5 23:52:35 2024 00:15:04.266 write: IOPS=37.4k, BW=146MiB/s (153MB/s)(730MiB/5001msec); 0 zone resets 00:15:04.266 slat (usec): min=2, max=524, avg= 4.46, stdev= 2.74 00:15:04.266 clat (usec): min=214, max=23299, avg=1534.93, stdev=468.26 00:15:04.266 lat (usec): min=217, max=23302, avg=1539.39, stdev=468.56 00:15:04.266 clat percentiles (usec): 00:15:04.266 | 1.00th=[ 1057], 5.00th=[ 1188], 10.00th=[ 1254], 20.00th=[ 1336], 00:15:04.266 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1500], 60.00th=[ 1549], 00:15:04.266 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1827], 95.00th=[ 1958], 00:15:04.266 | 99.00th=[ 2343], 99.50th=[ 2606], 99.90th=[ 5538], 99.95th=[10814], 00:15:04.266 | 99.99th=[21365] 00:15:04.266 bw ( KiB/s): min=141904, max=161344, per=100.00%, avg=150119.11, stdev=5162.07, samples=9 00:15:04.266 iops : min=35476, max=40340, avg=37530.00, stdev=1291.53, samples=9 00:15:04.266 lat (usec) : 250=0.01%, 500=0.01%, 750=0.06%, 1000=0.31% 00:15:04.266 lat (msec) : 2=95.54%, 4=3.95%, 10=0.07%, 20=0.04%, 50=0.01% 00:15:04.266 cpu : usr=44.06%, sys=50.40%, ctx=10, majf=0, minf=763 00:15:04.266 IO depths : 1=1.4%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.5%, >=64=1.7% 00:15:04.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.266 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:04.266 issued rwts: total=0,186942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:04.266 00:15:04.266 Run status group 0 (all jobs): 00:15:04.266 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=730MiB (766MB), run=5001-5001msec 00:15:04.527 ----------------------------------------------------- 00:15:04.527 Suppressions used: 00:15:04.527 count bytes template 00:15:04.527 1 11 /usr/src/fio/parse.c 00:15:04.527 1 8 libtcmalloc_minimal.so 00:15:04.527 1 904 libcrypto.so 00:15:04.527 ----------------------------------------------------- 00:15:04.527 00:15:04.527 00:15:04.527 real 0m13.836s 00:15:04.527 user 0m7.307s 00:15:04.527 sys 0m5.754s 00:15:04.527 23:52:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.527 ************************************ 00:15:04.527 END TEST xnvme_fio_plugin 00:15:04.527 ************************************ 00:15:04.527 23:52:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:04.527 Process with pid 71381 is not found 00:15:04.527 23:52:37 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71381 00:15:04.527 23:52:37 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71381 ']' 00:15:04.527 
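A hand-run equivalent of the fio plugin step above (paths exactly as logged; the ASAN preload is only needed because this build links libasan, and a hypothetical /tmp/xnvme_bdev.json stands in for the /dev/fd/62 process substitution):

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev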
23:52:37 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71381 00:15:04.527 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71381) - No such process 00:15:04.527 23:52:37 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71381 is not found' 00:15:04.527 ************************************ 00:15:04.527 END TEST nvme_xnvme 00:15:04.527 ************************************ 00:15:04.527 00:15:04.527 real 3m28.891s 00:15:04.527 user 1m58.137s 00:15:04.527 sys 1m14.789s 00:15:04.527 23:52:37 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.527 23:52:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.527 23:52:37 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:04.528 23:52:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.528 23:52:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.528 23:52:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.528 ************************************ 00:15:04.528 START TEST blockdev_xnvme 00:15:04.528 ************************************ 00:15:04.528 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:04.789 * Looking for test storage... 00:15:04.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:04.789 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:04.789 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:04.789 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:04.789 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:04.789 23:52:37 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.790 23:52:37 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.790 --rc genhtml_branch_coverage=1 00:15:04.790 --rc genhtml_function_coverage=1 00:15:04.790 --rc genhtml_legend=1 00:15:04.790 --rc geninfo_all_blocks=1 00:15:04.790 --rc geninfo_unexecuted_blocks=1 00:15:04.790 00:15:04.790 ' 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.790 --rc genhtml_branch_coverage=1 00:15:04.790 --rc genhtml_function_coverage=1 00:15:04.790 --rc genhtml_legend=1 00:15:04.790 --rc geninfo_all_blocks=1 00:15:04.790 --rc geninfo_unexecuted_blocks=1 00:15:04.790 00:15:04.790 ' 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.790 --rc genhtml_branch_coverage=1 00:15:04.790 --rc genhtml_function_coverage=1 00:15:04.790 --rc genhtml_legend=1 00:15:04.790 --rc geninfo_all_blocks=1 00:15:04.790 --rc geninfo_unexecuted_blocks=1 00:15:04.790 00:15:04.790 ' 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.790 --rc genhtml_branch_coverage=1 00:15:04.790 --rc genhtml_function_coverage=1 00:15:04.790 --rc genhtml_legend=1 00:15:04.790 --rc geninfo_all_blocks=1 00:15:04.790 --rc geninfo_unexecuted_blocks=1 00:15:04.790 00:15:04.790 ' 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72018 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72018 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72018 ']' 00:15:04.790 23:52:37 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.790 23:52:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.790 [2024-12-05 23:52:37.434945] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
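waitforlisten, traced above, simply blocks until the newly started spdk_tgt answers on the default /var/tmp/spdk.sock socket. A crude stand-in, assuming default paths and using only RPCs that appear in this log:

./build/bin/spdk_tgt &
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done   # rough equivalent of waitforlisten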
00:15:04.790 [2024-12-05 23:52:37.435132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ] 00:15:05.052 [2024-12-05 23:52:37.594669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.052 [2024-12-05 23:52:37.746807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.996 23:52:38 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.996 23:52:38 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:05.996 23:52:38 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:05.996 23:52:38 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:05.996 23:52:38 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:05.996 23:52:38 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:05.996 23:52:38 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:06.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:07.144 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:07.144 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:07.144 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:07.144 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:07.144 23:52:39 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:07.144 nvme0n1 00:15:07.144 nvme0n2 00:15:07.144 nvme0n3 00:15:07.144 nvme1n1 00:15:07.144 nvme2n1 00:15:07.144 nvme3n1 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.144 23:52:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.144 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.145 
23:52:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.145 23:52:39 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "acc7f6c8-b01e-48b0-98bc-a7dfbcab2471"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "acc7f6c8-b01e-48b0-98bc-a7dfbcab2471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "8c785314-4824-4a6d-acce-aa6e08d856b9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c785314-4824-4a6d-acce-aa6e08d856b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a8fd1668-4c4b-4d9e-afdb-1ea6a25b39da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8fd1668-4c4b-4d9e-afdb-1ea6a25b39da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"87ab4ba0-85ac-46f4-a60a-532d22652c19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "87ab4ba0-85ac-46f4-a60a-532d22652c19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f78a8309-4516-4558-9a13-ec18aa8fa13e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f78a8309-4516-4558-9a13-ec18aa8fa13e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "94c5eacf-13f1-46bd-a670-8e0946a8716b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "94c5eacf-13f1-46bd-a670-8e0946a8716b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:07.145 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:07.406 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:07.406 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:07.406 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:07.406 23:52:39 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72018 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72018 ']' 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72018 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 72018 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.406 killing process with pid 72018 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72018' 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72018 00:15:07.406 23:52:39 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72018 00:15:09.321 23:52:41 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:09.321 23:52:41 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:09.321 23:52:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.321 23:52:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.321 23:52:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.321 ************************************ 00:15:09.321 START TEST bdev_hello_world 00:15:09.321 ************************************ 00:15:09.321 23:52:41 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:09.321 [2024-12-05 23:52:41.829893] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:15:09.321 [2024-12-05 23:52:41.830258] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72302 ] 00:15:09.321 [2024-12-05 23:52:41.994136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.583 [2024-12-05 23:52:42.144304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.155 [2024-12-05 23:52:42.600866] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:10.155 [2024-12-05 23:52:42.601229] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:10.155 [2024-12-05 23:52:42.601272] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:10.155 [2024-12-05 23:52:42.603649] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:10.155 [2024-12-05 23:52:42.604163] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:10.155 [2024-12-05 23:52:42.604190] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:10.155 [2024-12-05 23:52:42.605055] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
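The six xNVMe bdevs exercised above were registered with bdev_xnvme_create using the io_uring mechanism and the -c flag, then listed with bdev_get_bdevs filtered to unclaimed bdevs. A minimal sketch of issuing the same RPCs by hand for one namespace, assuming the target is on the default RPC socket and jq is available:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c   # repeat per namespace
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'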
00:15:10.155 00:15:10.155 [2024-12-05 23:52:42.605113] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:11.101 ************************************ 00:15:11.101 END TEST bdev_hello_world 00:15:11.101 ************************************ 00:15:11.101 00:15:11.101 real 0m1.726s 00:15:11.101 user 0m1.284s 00:15:11.101 sys 0m0.286s 00:15:11.101 23:52:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.101 23:52:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:11.101 23:52:43 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:11.101 23:52:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.101 23:52:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.101 23:52:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.101 ************************************ 00:15:11.101 START TEST bdev_bounds 00:15:11.101 ************************************ 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:11.101 Process bdevio pid: 72344 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72344 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72344' 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72344 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72344 ']' 00:15:11.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.101 23:52:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:11.101 [2024-12-05 23:52:43.614480] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
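The bdev_hello_world test above runs the hello_bdev example, which opens nvme0n1 from the JSON bdev config, writes a buffer, and reads back "Hello World!". A minimal sketch of the invocation with paths relative to the SPDK repo (an assumption; the log itself uses absolute vagrant paths):

    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1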
00:15:11.101 [2024-12-05 23:52:43.614632] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72344 ] 00:15:11.101 [2024-12-05 23:52:43.777919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.363 [2024-12-05 23:52:43.937858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.363 [2024-12-05 23:52:43.939146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.363 [2024-12-05 23:52:43.939232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.937 23:52:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.937 23:52:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:11.937 23:52:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:11.937 I/O targets: 00:15:11.937 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:11.937 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:11.937 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:11.937 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:11.937 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:11.937 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:11.937 00:15:11.937 00:15:11.937 CUnit - A unit testing framework for C - Version 2.1-3 00:15:11.937 http://cunit.sourceforge.net/ 00:15:11.937 00:15:11.937 00:15:11.937 Suite: bdevio tests on: nvme3n1 00:15:11.937 Test: blockdev write read block ...passed 00:15:11.938 Test: blockdev write zeroes read block ...passed 00:15:11.938 Test: blockdev write zeroes read no split ...passed 00:15:11.938 Test: blockdev write zeroes read split ...passed 00:15:12.199 Test: blockdev write zeroes read split partial ...passed 00:15:12.199 Test: blockdev reset ...passed 00:15:12.199 Test: blockdev write read 8 blocks ...passed 00:15:12.199 Test: blockdev write read size > 128k ...passed 00:15:12.199 Test: blockdev write read invalid size ...passed 00:15:12.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.199 Test: blockdev write read max offset ...passed 00:15:12.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.199 Test: blockdev writev readv 8 blocks ...passed 00:15:12.199 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.199 Test: blockdev writev readv block ...passed 00:15:12.199 Test: blockdev writev readv size > 128k ...passed 00:15:12.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.199 Test: blockdev comparev and writev ...passed 00:15:12.199 Test: blockdev nvme passthru rw ...passed 00:15:12.199 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.199 Test: blockdev nvme admin passthru ...passed 00:15:12.199 Test: blockdev copy ...passed 00:15:12.199 Suite: bdevio tests on: nvme2n1 00:15:12.199 Test: blockdev write read block ...passed 00:15:12.199 Test: blockdev write zeroes read block ...passed 00:15:12.199 Test: blockdev write zeroes read no split ...passed 00:15:12.199 Test: blockdev write zeroes read split ...passed 00:15:12.199 Test: blockdev write zeroes read split partial ...passed 00:15:12.199 Test: blockdev reset ...passed 
00:15:12.199 Test: blockdev write read 8 blocks ...passed 00:15:12.199 Test: blockdev write read size > 128k ...passed 00:15:12.199 Test: blockdev write read invalid size ...passed 00:15:12.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.200 Test: blockdev write read max offset ...passed 00:15:12.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.200 Test: blockdev writev readv 8 blocks ...passed 00:15:12.200 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.200 Test: blockdev writev readv block ...passed 00:15:12.200 Test: blockdev writev readv size > 128k ...passed 00:15:12.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.200 Test: blockdev comparev and writev ...passed 00:15:12.200 Test: blockdev nvme passthru rw ...passed 00:15:12.200 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.200 Test: blockdev nvme admin passthru ...passed 00:15:12.200 Test: blockdev copy ...passed 00:15:12.200 Suite: bdevio tests on: nvme1n1 00:15:12.200 Test: blockdev write read block ...passed 00:15:12.200 Test: blockdev write zeroes read block ...passed 00:15:12.200 Test: blockdev write zeroes read no split ...passed 00:15:12.200 Test: blockdev write zeroes read split ...passed 00:15:12.200 Test: blockdev write zeroes read split partial ...passed 00:15:12.200 Test: blockdev reset ...passed 00:15:12.200 Test: blockdev write read 8 blocks ...passed 00:15:12.200 Test: blockdev write read size > 128k ...passed 00:15:12.200 Test: blockdev write read invalid size ...passed 00:15:12.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.200 Test: blockdev write read max offset ...passed 00:15:12.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.200 Test: blockdev writev readv 8 blocks ...passed 00:15:12.200 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.200 Test: blockdev writev readv block ...passed 00:15:12.200 Test: blockdev writev readv size > 128k ...passed 00:15:12.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.200 Test: blockdev comparev and writev ...passed 00:15:12.200 Test: blockdev nvme passthru rw ...passed 00:15:12.200 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.200 Test: blockdev nvme admin passthru ...passed 00:15:12.200 Test: blockdev copy ...passed 00:15:12.200 Suite: bdevio tests on: nvme0n3 00:15:12.200 Test: blockdev write read block ...passed 00:15:12.200 Test: blockdev write zeroes read block ...passed 00:15:12.200 Test: blockdev write zeroes read no split ...passed 00:15:12.200 Test: blockdev write zeroes read split ...passed 00:15:12.462 Test: blockdev write zeroes read split partial ...passed 00:15:12.462 Test: blockdev reset ...passed 00:15:12.462 Test: blockdev write read 8 blocks ...passed 00:15:12.462 Test: blockdev write read size > 128k ...passed 00:15:12.462 Test: blockdev write read invalid size ...passed 00:15:12.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.462 Test: blockdev write read max offset ...passed 00:15:12.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.462 Test: blockdev writev readv 8 blocks 
...passed 00:15:12.462 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.462 Test: blockdev writev readv block ...passed 00:15:12.462 Test: blockdev writev readv size > 128k ...passed 00:15:12.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.462 Test: blockdev comparev and writev ...passed 00:15:12.462 Test: blockdev nvme passthru rw ...passed 00:15:12.462 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.462 Test: blockdev nvme admin passthru ...passed 00:15:12.462 Test: blockdev copy ...passed 00:15:12.462 Suite: bdevio tests on: nvme0n2 00:15:12.462 Test: blockdev write read block ...passed 00:15:12.462 Test: blockdev write zeroes read block ...passed 00:15:12.462 Test: blockdev write zeroes read no split ...passed 00:15:12.462 Test: blockdev write zeroes read split ...passed 00:15:12.462 Test: blockdev write zeroes read split partial ...passed 00:15:12.462 Test: blockdev reset ...passed 00:15:12.462 Test: blockdev write read 8 blocks ...passed 00:15:12.462 Test: blockdev write read size > 128k ...passed 00:15:12.462 Test: blockdev write read invalid size ...passed 00:15:12.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.462 Test: blockdev write read max offset ...passed 00:15:12.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.462 Test: blockdev writev readv 8 blocks ...passed 00:15:12.462 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.462 Test: blockdev writev readv block ...passed 00:15:12.462 Test: blockdev writev readv size > 128k ...passed 00:15:12.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.462 Test: blockdev comparev and writev ...passed 00:15:12.462 Test: blockdev nvme passthru rw ...passed 00:15:12.462 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.462 Test: blockdev nvme admin passthru ...passed 00:15:12.462 Test: blockdev copy ...passed 00:15:12.462 Suite: bdevio tests on: nvme0n1 00:15:12.462 Test: blockdev write read block ...passed 00:15:12.462 Test: blockdev write zeroes read block ...passed 00:15:12.462 Test: blockdev write zeroes read no split ...passed 00:15:12.462 Test: blockdev write zeroes read split ...passed 00:15:12.462 Test: blockdev write zeroes read split partial ...passed 00:15:12.462 Test: blockdev reset ...passed 00:15:12.462 Test: blockdev write read 8 blocks ...passed 00:15:12.462 Test: blockdev write read size > 128k ...passed 00:15:12.462 Test: blockdev write read invalid size ...passed 00:15:12.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:12.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:12.462 Test: blockdev write read max offset ...passed 00:15:12.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:12.462 Test: blockdev writev readv 8 blocks ...passed 00:15:12.462 Test: blockdev writev readv 30 x 1block ...passed 00:15:12.462 Test: blockdev writev readv block ...passed 00:15:12.462 Test: blockdev writev readv size > 128k ...passed 00:15:12.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:12.462 Test: blockdev comparev and writev ...passed 00:15:12.462 Test: blockdev nvme passthru rw ...passed 00:15:12.462 Test: blockdev nvme passthru vendor specific ...passed 00:15:12.462 Test: blockdev nvme admin passthru ...passed 00:15:12.462 Test: blockdev copy ...passed 
00:15:12.462 00:15:12.462 Run Summary: Type Total Ran Passed Failed Inactive 00:15:12.462 suites 6 6 n/a 0 0 00:15:12.462 tests 138 138 138 0 0 00:15:12.462 asserts 780 780 780 0 n/a 00:15:12.462 00:15:12.462 Elapsed time = 1.311 seconds 00:15:12.462 0 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72344 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72344 ']' 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72344 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72344 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.462 killing process with pid 72344 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72344' 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72344 00:15:12.462 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72344 00:15:13.402 23:52:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:13.402 00:15:13.402 real 0m2.382s 00:15:13.402 user 0m5.621s 00:15:13.402 sys 0m0.451s 00:15:13.402 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.402 23:52:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 ************************************ 00:15:13.402 END TEST bdev_bounds 00:15:13.402 ************************************ 00:15:13.402 23:52:45 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:13.402 23:52:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:13.402 23:52:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.402 23:52:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 ************************************ 00:15:13.402 START TEST bdev_nbd 00:15:13.402 ************************************ 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
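The bounds suite summarized above starts bdevio in wait mode (-w -s 0) against the same JSON config and then drives the CUnit suites with tests.py perform_tests over the default RPC socket. A minimal sketch, again with repo-relative paths as an assumption:

    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests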
00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72400 00:15:13.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72400 /var/tmp/spdk-nbd.sock 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72400 ']' 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:13.402 23:52:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:13.402 [2024-12-05 23:52:46.060387] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
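The nbd_function_test that follows exports each bdev as a /dev/nbdN device through nbd_start_disk, verifies it with a single direct-I/O dd read, and tears it down with nbd_stop_disk. A minimal sketch of one round trip, assuming the bdev_svc target started above is listening on /var/tmp/spdk-nbd.sock and /tmp/nbdtest is a scratch file:

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # read one 4 KiB block back
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0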
00:15:13.402 [2024-12-05 23:52:46.060705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.661 [2024-12-05 23:52:46.222620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.661 [2024-12-05 23:52:46.334354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.227 23:52:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.485 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.486 
1+0 records in 00:15:14.486 1+0 records out 00:15:14.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484187 s, 8.5 MB/s 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.486 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.744 1+0 records in 00:15:14.744 1+0 records out 00:15:14.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118048 s, 3.5 MB/s 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:14.744 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:15.002 23:52:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.002 1+0 records in 00:15:15.002 1+0 records out 00:15:15.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853294 s, 4.8 MB/s 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.002 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.261 1+0 records in 00:15:15.261 1+0 records out 00:15:15.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109367 s, 3.7 MB/s 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.261 23:52:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.520 1+0 records in 00:15:15.520 1+0 records out 00:15:15.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965726 s, 4.2 MB/s 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.520 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:15.779 23:52:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.779 1+0 records in 00:15:15.779 1+0 records out 00:15:15.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154506 s, 2.7 MB/s 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:15.779 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd0", 00:15:16.038 "bdev_name": "nvme0n1" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd1", 00:15:16.038 "bdev_name": "nvme0n2" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd2", 00:15:16.038 "bdev_name": "nvme0n3" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd3", 00:15:16.038 "bdev_name": "nvme1n1" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd4", 00:15:16.038 "bdev_name": "nvme2n1" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd5", 00:15:16.038 "bdev_name": "nvme3n1" 00:15:16.038 } 00:15:16.038 ]' 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd0", 00:15:16.038 "bdev_name": "nvme0n1" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd1", 00:15:16.038 "bdev_name": "nvme0n2" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd2", 00:15:16.038 "bdev_name": "nvme0n3" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd3", 00:15:16.038 "bdev_name": "nvme1n1" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd4", 00:15:16.038 "bdev_name": "nvme2n1" 00:15:16.038 }, 00:15:16.038 { 00:15:16.038 "nbd_device": "/dev/nbd5", 00:15:16.038 "bdev_name": "nvme3n1" 00:15:16.038 } 00:15:16.038 ]' 00:15:16.038 23:52:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.038 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.383 23:52:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.383 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.661 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:16.919 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:16.919 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:16.919 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:16.920 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.179 23:52:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.437 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:17.438 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:17.696 /dev/nbd0 00:15:17.696 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.696 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.696 23:52:50 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.696 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.696 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.696 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.697 1+0 records in 00:15:17.697 1+0 records out 00:15:17.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308608 s, 13.3 MB/s 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:17.697 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:17.955 /dev/nbd1 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.955 1+0 records in 00:15:17.955 1+0 records out 00:15:17.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363507 s, 11.3 MB/s 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:17.955 23:52:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:17.955 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:18.214 /dev/nbd10 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.214 1+0 records in 00:15:18.214 1+0 records out 00:15:18.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442547 s, 9.3 MB/s 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.214 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:18.473 /dev/nbd11 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.473 23:52:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.473 1+0 records in 00:15:18.473 1+0 records out 00:15:18.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037804 s, 10.8 MB/s 00:15:18.473 23:52:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.473 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:18.731 /dev/nbd12 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.731 1+0 records in 00:15:18.731 1+0 records out 00:15:18.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346474 s, 11.8 MB/s 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.731 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:18.989 /dev/nbd13 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.989 1+0 records in 00:15:18.989 1+0 records out 00:15:18.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662965 s, 6.2 MB/s 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:18.989 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:19.247 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd0", 00:15:19.247 "bdev_name": "nvme0n1" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd1", 00:15:19.247 "bdev_name": "nvme0n2" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd10", 00:15:19.247 "bdev_name": "nvme0n3" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd11", 00:15:19.247 "bdev_name": "nvme1n1" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd12", 00:15:19.247 "bdev_name": "nvme2n1" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd13", 00:15:19.247 "bdev_name": "nvme3n1" 00:15:19.247 } 00:15:19.247 ]' 00:15:19.247 23:52:51 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd0", 00:15:19.247 "bdev_name": "nvme0n1" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd1", 00:15:19.247 "bdev_name": "nvme0n2" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd10", 00:15:19.247 "bdev_name": "nvme0n3" 00:15:19.247 }, 00:15:19.247 { 00:15:19.247 "nbd_device": "/dev/nbd11", 00:15:19.247 "bdev_name": "nvme1n1" 00:15:19.247 }, 00:15:19.247 { 00:15:19.248 "nbd_device": "/dev/nbd12", 00:15:19.248 "bdev_name": "nvme2n1" 00:15:19.248 }, 00:15:19.248 { 00:15:19.248 "nbd_device": "/dev/nbd13", 00:15:19.248 "bdev_name": "nvme3n1" 00:15:19.248 } 00:15:19.248 ]' 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:19.248 /dev/nbd1 00:15:19.248 /dev/nbd10 00:15:19.248 /dev/nbd11 00:15:19.248 /dev/nbd12 00:15:19.248 /dev/nbd13' 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:19.248 /dev/nbd1 00:15:19.248 /dev/nbd10 00:15:19.248 /dev/nbd11 00:15:19.248 /dev/nbd12 00:15:19.248 /dev/nbd13' 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:19.248 256+0 records in 00:15:19.248 256+0 records out 00:15:19.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998002 s, 105 MB/s 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:19.248 256+0 records in 00:15:19.248 256+0 records out 00:15:19.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.060707 s, 17.3 MB/s 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:19.248 256+0 records in 00:15:19.248 256+0 records out 00:15:19.248 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0589119 s, 17.8 MB/s 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.248 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:19.506 256+0 records in 00:15:19.506 256+0 records out 00:15:19.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.058092 s, 18.1 MB/s 00:15:19.506 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:19.506 256+0 records in 00:15:19.506 256+0 records out 00:15:19.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0619463 s, 16.9 MB/s 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:19.506 256+0 records in 00:15:19.506 256+0 records out 00:15:19.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0580676 s, 18.1 MB/s 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:19.506 256+0 records in 00:15:19.506 256+0 records out 00:15:19.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0704953 s, 14.9 MB/s 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:19.506 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:19.766 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.767 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.027 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.289 23:52:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.550 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.809 23:52:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.809 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:21.068 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:21.326 malloc_lvol_verify 00:15:21.326 23:52:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:21.585 8e58c0ca-c658-42ec-8f50-93a8d7b953c1 00:15:21.585 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:21.843 7fa8c18c-4e70-4d60-b29c-d60174edbc5f 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:21.843 /dev/nbd0 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:15:21.843 mke2fs 1.47.0 (5-Feb-2023) 00:15:21.843 Discarding device blocks: 0/4096 done 00:15:21.843 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:21.843 00:15:21.843 Allocating group tables: 0/1 done 00:15:21.843 Writing inode tables: 0/1 done 00:15:21.843 Creating journal (1024 blocks): done 00:15:21.843 Writing superblocks and filesystem accounting information: 0/1 done 00:15:21.843 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.843 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:21.844 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.844 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72400 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72400 ']' 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72400 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72400 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.102 killing process with pid 72400 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72400' 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72400 00:15:22.102 23:52:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72400 00:15:23.037 23:52:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:23.037 00:15:23.037 real 0m9.395s 00:15:23.037 user 0m13.521s 00:15:23.037 sys 0m3.127s 00:15:23.037 23:52:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.037 ************************************ 00:15:23.037 END TEST bdev_nbd 00:15:23.037 ************************************ 00:15:23.037 
23:52:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:23.038 23:52:55 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:23.038 23:52:55 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:23.038 23:52:55 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:23.038 23:52:55 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:23.038 23:52:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.038 23:52:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.038 23:52:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.038 ************************************ 00:15:23.038 START TEST bdev_fio 00:15:23.038 ************************************ 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:23.038 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:23.038 ************************************ 00:15:23.038 START TEST bdev_fio_rw_verify 00:15:23.038 ************************************ 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:23.038 23:52:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:23.038 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.038 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.038 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.038 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.038 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.038 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:23.038 fio-3.35 00:15:23.038 Starting 6 threads 00:15:35.240 00:15:35.240 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72789: Thu Dec 5 23:53:06 2024 00:15:35.240 read: IOPS=23.8k, BW=92.9MiB/s (97.4MB/s)(929MiB/10003msec) 00:15:35.240 slat (usec): min=2, max=2086, avg= 5.55, stdev=13.48 00:15:35.240 clat (usec): min=82, max=9816, avg=778.55, 
stdev=610.30 00:15:35.240 lat (usec): min=85, max=9824, avg=784.10, stdev=611.08 00:15:35.240 clat percentiles (usec): 00:15:35.240 | 50.000th=[ 570], 99.000th=[ 2802], 99.900th=[ 3884], 99.990th=[ 5211], 00:15:35.240 | 99.999th=[ 9765] 00:15:35.240 write: IOPS=24.1k, BW=94.2MiB/s (98.8MB/s)(943MiB/10003msec); 0 zone resets 00:15:35.240 slat (usec): min=6, max=4477, avg=32.64, stdev=100.43 00:15:35.240 clat (usec): min=61, max=6698, avg=955.68, stdev=668.16 00:15:35.240 lat (usec): min=76, max=6717, avg=988.31, stdev=681.11 00:15:35.240 clat percentiles (usec): 00:15:35.240 | 50.000th=[ 742], 99.000th=[ 3163], 99.900th=[ 4424], 99.990th=[ 5342], 00:15:35.240 | 99.999th=[ 6521] 00:15:35.240 bw ( KiB/s): min=50626, max=174178, per=100.00%, avg=97649.68, stdev=5967.30, samples=114 00:15:35.240 iops : min=12655, max=43543, avg=24411.21, stdev=1491.81, samples=114 00:15:35.240 lat (usec) : 100=0.06%, 250=10.03%, 500=25.51%, 750=20.21%, 1000=12.07% 00:15:35.240 lat (msec) : 2=25.66%, 4=6.31%, 10=0.16% 00:15:35.240 cpu : usr=42.25%, sys=33.33%, ctx=7633, majf=0, minf=21124 00:15:35.240 IO depths : 1=11.1%, 2=23.5%, 4=51.4%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:35.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.240 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:35.240 issued rwts: total=237834,241311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:35.240 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:35.240 00:15:35.240 Run status group 0 (all jobs): 00:15:35.240 READ: bw=92.9MiB/s (97.4MB/s), 92.9MiB/s-92.9MiB/s (97.4MB/s-97.4MB/s), io=929MiB (974MB), run=10003-10003msec 00:15:35.240 WRITE: bw=94.2MiB/s (98.8MB/s), 94.2MiB/s-94.2MiB/s (98.8MB/s-98.8MB/s), io=943MiB (988MB), run=10003-10003msec 00:15:35.240 ----------------------------------------------------- 00:15:35.240 Suppressions used: 00:15:35.240 count bytes template 00:15:35.240 6 48 /usr/src/fio/parse.c 00:15:35.240 3317 318432 /usr/src/fio/iolog.c 00:15:35.240 1 8 libtcmalloc_minimal.so 00:15:35.240 1 904 libcrypto.so 00:15:35.240 ----------------------------------------------------- 00:15:35.240 00:15:35.240 00:15:35.240 real 0m11.905s 00:15:35.240 user 0m26.856s 00:15:35.240 sys 0m20.277s 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.240 ************************************ 00:15:35.240 END TEST bdev_fio_rw_verify 00:15:35.240 ************************************ 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:35.240 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "acc7f6c8-b01e-48b0-98bc-a7dfbcab2471"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "acc7f6c8-b01e-48b0-98bc-a7dfbcab2471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "8c785314-4824-4a6d-acce-aa6e08d856b9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c785314-4824-4a6d-acce-aa6e08d856b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a8fd1668-4c4b-4d9e-afdb-1ea6a25b39da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8fd1668-4c4b-4d9e-afdb-1ea6a25b39da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "87ab4ba0-85ac-46f4-a60a-532d22652c19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "87ab4ba0-85ac-46f4-a60a-532d22652c19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f78a8309-4516-4558-9a13-ec18aa8fa13e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f78a8309-4516-4558-9a13-ec18aa8fa13e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "94c5eacf-13f1-46bd-a670-8e0946a8716b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "94c5eacf-13f1-46bd-a670-8e0946a8716b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:35.241 /home/vagrant/spdk_repo/spdk 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:15:35.241 00:15:35.241 real 0m12.078s 00:15:35.241 user 0m26.933s 00:15:35.241 sys 0m20.355s 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.241 23:53:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:35.241 ************************************ 00:15:35.241 END TEST bdev_fio 00:15:35.241 ************************************ 00:15:35.241 23:53:07 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:35.241 23:53:07 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:35.241 23:53:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:35.241 23:53:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.241 23:53:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.241 ************************************ 00:15:35.241 START TEST bdev_verify 00:15:35.241 ************************************ 00:15:35.241 23:53:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:35.241 [2024-12-05 23:53:07.653774] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:15:35.241 [2024-12-05 23:53:07.653919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:15:35.241 [2024-12-05 23:53:07.819821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:35.503 [2024-12-05 23:53:07.970412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.503 [2024-12-05 23:53:07.970485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.763 Running I/O for 5 seconds... 
00:15:38.093 25440.00 IOPS, 99.38 MiB/s [2024-12-05T23:53:11.746Z] 24080.00 IOPS, 94.06 MiB/s [2024-12-05T23:53:13.131Z] 23946.67 IOPS, 93.54 MiB/s [2024-12-05T23:53:13.701Z] 24280.00 IOPS, 94.84 MiB/s [2024-12-05T23:53:13.701Z] 24115.20 IOPS, 94.20 MiB/s 00:15:40.992 Latency(us) 00:15:40.992 [2024-12-05T23:53:13.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.992 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x0 length 0x80000 00:15:40.992 nvme0n1 : 5.06 1972.56 7.71 0.00 0.00 64770.41 9275.86 71787.13 00:15:40.992 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x80000 length 0x80000 00:15:40.992 nvme0n1 : 5.05 1926.69 7.53 0.00 0.00 66320.29 10183.29 62511.26 00:15:40.992 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x0 length 0x80000 00:15:40.992 nvme0n2 : 5.07 1917.99 7.49 0.00 0.00 66492.77 10183.29 71787.13 00:15:40.992 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x80000 length 0x80000 00:15:40.992 nvme0n2 : 5.05 1875.44 7.33 0.00 0.00 68026.39 10838.65 63721.16 00:15:40.992 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x0 length 0x80000 00:15:40.992 nvme0n3 : 5.08 1913.34 7.47 0.00 0.00 66534.02 8015.56 62511.26 00:15:40.992 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x80000 length 0x80000 00:15:40.992 nvme0n3 : 5.05 1874.88 7.32 0.00 0.00 67933.83 11494.01 70980.53 00:15:40.992 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x0 length 0x20000 00:15:40.992 nvme1n1 : 5.08 1914.60 7.48 0.00 0.00 66378.07 7813.91 66544.25 00:15:40.992 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x20000 length 0x20000 00:15:40.992 nvme1n1 : 5.04 1853.54 7.24 0.00 0.00 68600.58 12603.08 71787.13 00:15:40.992 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.992 Verification LBA range: start 0x0 length 0xa0000 00:15:40.992 nvme2n1 : 5.09 1934.60 7.56 0.00 0.00 65555.97 6805.66 74206.92 00:15:40.992 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.993 Verification LBA range: start 0xa0000 length 0xa0000 00:15:40.993 nvme2n1 : 5.06 1871.94 7.31 0.00 0.00 67811.97 9427.10 61301.37 00:15:40.993 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:40.993 Verification LBA range: start 0x0 length 0xbd0bd 00:15:40.993 nvme3n1 : 5.08 2420.44 9.45 0.00 0.00 52247.22 6301.54 65334.35 00:15:40.993 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.993 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:40.993 nvme3n1 : 5.06 2451.49 9.58 0.00 0.00 51638.39 5444.53 58074.98 00:15:40.993 [2024-12-05T23:53:13.702Z] =================================================================================================================== 00:15:40.993 [2024-12-05T23:53:13.702Z] Total : 23927.50 93.47 0.00 0.00 63787.39 5444.53 74206.92 00:15:41.934 00:15:41.934 real 0m6.841s 00:15:41.934 user 0m11.054s 00:15:41.934 sys 0m1.494s 00:15:41.934 23:53:14 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.934 23:53:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:41.934 ************************************ 00:15:41.934 END TEST bdev_verify 00:15:41.934 ************************************ 00:15:41.934 23:53:14 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:41.934 23:53:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:41.934 23:53:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.934 23:53:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.934 ************************************ 00:15:41.934 START TEST bdev_verify_big_io 00:15:41.934 ************************************ 00:15:41.934 23:53:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:41.934 [2024-12-05 23:53:14.532977] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:15:41.935 [2024-12-05 23:53:14.533096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73067 ] 00:15:42.195 [2024-12-05 23:53:14.696132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:42.195 [2024-12-05 23:53:14.813597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.195 [2024-12-05 23:53:14.813688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.766 Running I/O for 5 seconds... 
00:15:48.891 1686.00 IOPS, 105.38 MiB/s [2024-12-05T23:53:22.199Z] 2643.50 IOPS, 165.22 MiB/s [2024-12-05T23:53:22.199Z] 3053.67 IOPS, 190.85 MiB/s 00:15:49.490 Latency(us) 00:15:49.490 [2024-12-05T23:53:22.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.490 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:49.490 Verification LBA range: start 0x0 length 0x8000 00:15:49.490 nvme0n1 : 6.10 102.37 6.40 0.00 0.00 1181657.67 7763.50 1884210.41 00:15:49.490 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x8000 length 0x8000 00:15:49.491 nvme0n1 : 5.70 131.85 8.24 0.00 0.00 947548.80 7158.55 1025991.29 00:15:49.491 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x0 length 0x8000 00:15:49.491 nvme0n2 : 6.10 81.33 5.08 0.00 0.00 1420592.81 137121.48 1535760.54 00:15:49.491 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x8000 length 0x8000 00:15:49.491 nvme0n2 : 5.71 100.93 6.31 0.00 0.00 1178624.35 104857.60 2064888.12 00:15:49.491 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x0 length 0x8000 00:15:49.491 nvme0n3 : 6.10 80.81 5.05 0.00 0.00 1335963.38 124215.93 2439149.10 00:15:49.491 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x8000 length 0x8000 00:15:49.491 nvme0n3 : 5.72 109.51 6.84 0.00 0.00 1057648.28 101227.91 1755154.90 00:15:49.491 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x0 length 0x2000 00:15:49.491 nvme1n1 : 6.17 125.81 7.86 0.00 0.00 816467.89 9326.28 1000180.18 00:15:49.491 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x2000 length 0x2000 00:15:49.491 nvme1n1 : 5.73 108.95 6.81 0.00 0.00 1049025.83 14821.22 2193943.63 00:15:49.491 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x0 length 0xa000 00:15:49.491 nvme2n1 : 6.45 159.89 9.99 0.00 0.00 613537.18 22080.59 1245385.65 00:15:49.491 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0xa000 length 0xa000 00:15:49.491 nvme2n1 : 6.11 125.64 7.85 0.00 0.00 852652.08 623.85 1342177.28 00:15:49.491 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0x0 length 0xbd0b 00:15:49.491 nvme3n1 : 6.62 248.11 15.51 0.00 0.00 380834.72 1726.62 2826315.62 00:15:49.491 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:49.491 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:49.491 nvme3n1 : 5.94 191.17 11.95 0.00 0.00 562812.33 2318.97 1006632.96 00:15:49.491 [2024-12-05T23:53:22.200Z] =================================================================================================================== 00:15:49.491 [2024-12-05T23:53:22.200Z] Total : 1566.38 97.90 0.00 0.00 837610.72 623.85 2826315.62 00:15:50.446 00:15:50.446 real 0m8.415s 00:15:50.446 user 0m15.483s 00:15:50.446 sys 0m0.488s 00:15:50.446 23:53:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.446 
************************************ 00:15:50.446 END TEST bdev_verify_big_io 00:15:50.446 ************************************ 00:15:50.446 23:53:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.446 23:53:22 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:50.446 23:53:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:50.446 23:53:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.446 23:53:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.446 ************************************ 00:15:50.446 START TEST bdev_write_zeroes 00:15:50.446 ************************************ 00:15:50.446 23:53:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:50.446 [2024-12-05 23:53:23.020124] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:15:50.446 [2024-12-05 23:53:23.020246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73178 ] 00:15:50.707 [2024-12-05 23:53:23.182842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.708 [2024-12-05 23:53:23.306629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.280 Running I/O for 1 seconds... 
00:15:52.226 68768.00 IOPS, 268.62 MiB/s 00:15:52.226 Latency(us) 00:15:52.226 [2024-12-05T23:53:24.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.226 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:52.226 nvme0n1 : 1.02 11175.30 43.65 0.00 0.00 11442.77 7007.31 23693.78 00:15:52.226 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:52.226 nvme0n2 : 1.02 11120.69 43.44 0.00 0.00 11488.50 7208.96 23088.84 00:15:52.226 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:52.226 nvme0n3 : 1.02 11161.41 43.60 0.00 0.00 11436.00 6704.84 23895.43 00:15:52.226 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:52.226 nvme1n1 : 1.02 11148.60 43.55 0.00 0.00 11440.20 6755.25 23996.26 00:15:52.226 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:52.226 nvme2n1 : 1.02 11135.85 43.50 0.00 0.00 11443.95 6805.66 24197.91 00:15:52.226 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:52.226 nvme3n1 : 1.03 12465.28 48.69 0.00 0.00 10214.26 5293.29 20870.70 00:15:52.226 [2024-12-05T23:53:24.935Z] =================================================================================================================== 00:15:52.226 [2024-12-05T23:53:24.935Z] Total : 68207.13 266.43 0.00 0.00 11223.03 5293.29 24197.91 00:15:53.167 00:15:53.168 real 0m2.635s 00:15:53.168 user 0m1.941s 00:15:53.168 sys 0m0.492s 00:15:53.168 23:53:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.168 ************************************ 00:15:53.168 23:53:25 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:53.168 END TEST bdev_write_zeroes 00:15:53.168 ************************************ 00:15:53.168 23:53:25 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:53.168 23:53:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:53.168 23:53:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.168 23:53:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.168 ************************************ 00:15:53.168 START TEST bdev_json_nonenclosed 00:15:53.168 ************************************ 00:15:53.168 23:53:25 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:53.168 [2024-12-05 23:53:25.732488] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:15:53.168 [2024-12-05 23:53:25.732624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73235 ] 00:15:53.428 [2024-12-05 23:53:25.898929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.428 [2024-12-05 23:53:26.024811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.428 [2024-12-05 23:53:26.024906] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:53.428 [2024-12-05 23:53:26.024926] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:53.429 [2024-12-05 23:53:26.024937] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:53.690 00:15:53.690 real 0m0.560s 00:15:53.690 user 0m0.336s 00:15:53.690 sys 0m0.118s 00:15:53.690 23:53:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.691 23:53:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 ************************************ 00:15:53.691 END TEST bdev_json_nonenclosed 00:15:53.691 ************************************ 00:15:53.691 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:53.691 23:53:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:53.691 23:53:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.691 23:53:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.691 ************************************ 00:15:53.691 START TEST bdev_json_nonarray 00:15:53.691 ************************************ 00:15:53.691 23:53:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:53.691 [2024-12-05 23:53:26.360365] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:15:53.691 [2024-12-05 23:53:26.360519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73262 ] 00:15:53.953 [2024-12-05 23:53:26.527329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.953 [2024-12-05 23:53:26.654863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.953 [2024-12-05 23:53:26.654992] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:53.953 [2024-12-05 23:53:26.655012] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:53.953 [2024-12-05 23:53:26.655023] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:54.214 00:15:54.214 real 0m0.565s 00:15:54.214 user 0m0.333s 00:15:54.214 sys 0m0.124s 00:15:54.214 ************************************ 00:15:54.214 23:53:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.214 23:53:26 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:54.214 END TEST bdev_json_nonarray 00:15:54.214 ************************************ 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:54.214 23:53:26 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:54.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:12.955 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:12.955 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.156 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.156 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.156 00:16:17.156 real 1m11.931s 00:16:17.156 user 1m22.267s 00:16:17.156 sys 1m28.347s 00:16:17.156 ************************************ 00:16:17.156 END TEST blockdev_xnvme 00:16:17.156 ************************************ 00:16:17.156 23:53:49 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.156 23:53:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.156 23:53:49 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:17.156 23:53:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.156 23:53:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.156 23:53:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.156 ************************************ 00:16:17.156 START TEST ublk 00:16:17.156 ************************************ 00:16:17.156 23:53:49 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:17.156 * Looking for test storage... 
00:16:17.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:17.156 23:53:49 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:17.156 23:53:49 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:17.156 23:53:49 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:17.156 23:53:49 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.156 23:53:49 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.156 23:53:49 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.156 23:53:49 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.156 23:53:49 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.156 23:53:49 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.156 23:53:49 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:17.156 23:53:49 ublk -- scripts/common.sh@345 -- # : 1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.156 23:53:49 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.156 23:53:49 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@353 -- # local d=1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.156 23:53:49 ublk -- scripts/common.sh@355 -- # echo 1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.156 23:53:49 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@353 -- # local d=2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.156 23:53:49 ublk -- scripts/common.sh@355 -- # echo 2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.156 23:53:49 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.156 23:53:49 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.156 23:53:49 ublk -- scripts/common.sh@368 -- # return 0 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.157 --rc genhtml_branch_coverage=1 00:16:17.157 --rc genhtml_function_coverage=1 00:16:17.157 --rc genhtml_legend=1 00:16:17.157 --rc geninfo_all_blocks=1 00:16:17.157 --rc geninfo_unexecuted_blocks=1 00:16:17.157 00:16:17.157 ' 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.157 --rc genhtml_branch_coverage=1 00:16:17.157 --rc genhtml_function_coverage=1 00:16:17.157 --rc genhtml_legend=1 00:16:17.157 --rc geninfo_all_blocks=1 00:16:17.157 --rc geninfo_unexecuted_blocks=1 00:16:17.157 00:16:17.157 ' 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.157 --rc genhtml_branch_coverage=1 00:16:17.157 --rc 
genhtml_function_coverage=1 00:16:17.157 --rc genhtml_legend=1 00:16:17.157 --rc geninfo_all_blocks=1 00:16:17.157 --rc geninfo_unexecuted_blocks=1 00:16:17.157 00:16:17.157 ' 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:17.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.157 --rc genhtml_branch_coverage=1 00:16:17.157 --rc genhtml_function_coverage=1 00:16:17.157 --rc genhtml_legend=1 00:16:17.157 --rc geninfo_all_blocks=1 00:16:17.157 --rc geninfo_unexecuted_blocks=1 00:16:17.157 00:16:17.157 ' 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:17.157 23:53:49 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:17.157 23:53:49 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:17.157 23:53:49 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:17.157 23:53:49 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:17.157 23:53:49 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:17.157 23:53:49 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:17.157 23:53:49 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:17.157 23:53:49 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:17.157 23:53:49 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.157 23:53:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.157 ************************************ 00:16:17.157 START TEST test_save_ublk_config 00:16:17.157 ************************************ 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73581 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73581 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73581 ']' 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.157 23:53:49 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:17.157 [2024-12-05 23:53:49.424794] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:16:17.157 [2024-12-05 23:53:49.424938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73581 ] 00:16:17.157 [2024-12-05 23:53:49.580414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.157 [2024-12-05 23:53:49.674551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.729 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.729 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:17.729 23:53:50 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:17.729 23:53:50 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:17.729 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.729 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.729 [2024-12-05 23:53:50.323991] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:17.729 [2024-12-05 23:53:50.324902] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:17.729 malloc0 00:16:17.729 [2024-12-05 23:53:50.396154] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:17.729 [2024-12-05 23:53:50.396256] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:17.729 [2024-12-05 23:53:50.396267] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:17.729 [2024-12-05 23:53:50.396275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:17.729 [2024-12-05 23:53:50.405097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:17.729 [2024-12-05 23:53:50.405130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:17.729 [2024-12-05 23:53:50.412005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:17.729 [2024-12-05 23:53:50.412132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:17.729 [2024-12-05 23:53:50.428999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:17.991 0 00:16:17.991 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.991 23:53:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:17.991 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.991 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.253 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.253 23:53:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:18.253 "subsystems": [ 00:16:18.253 { 00:16:18.253 "subsystem": 
"fsdev", 00:16:18.253 "config": [ 00:16:18.253 { 00:16:18.253 "method": "fsdev_set_opts", 00:16:18.253 "params": { 00:16:18.253 "fsdev_io_pool_size": 65535, 00:16:18.253 "fsdev_io_cache_size": 256 00:16:18.253 } 00:16:18.253 } 00:16:18.253 ] 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "subsystem": "keyring", 00:16:18.253 "config": [] 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "subsystem": "iobuf", 00:16:18.253 "config": [ 00:16:18.253 { 00:16:18.253 "method": "iobuf_set_options", 00:16:18.253 "params": { 00:16:18.253 "small_pool_count": 8192, 00:16:18.253 "large_pool_count": 1024, 00:16:18.253 "small_bufsize": 8192, 00:16:18.253 "large_bufsize": 135168, 00:16:18.253 "enable_numa": false 00:16:18.253 } 00:16:18.253 } 00:16:18.253 ] 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "subsystem": "sock", 00:16:18.253 "config": [ 00:16:18.253 { 00:16:18.253 "method": "sock_set_default_impl", 00:16:18.253 "params": { 00:16:18.253 "impl_name": "posix" 00:16:18.253 } 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "method": "sock_impl_set_options", 00:16:18.253 "params": { 00:16:18.253 "impl_name": "ssl", 00:16:18.253 "recv_buf_size": 4096, 00:16:18.253 "send_buf_size": 4096, 00:16:18.253 "enable_recv_pipe": true, 00:16:18.253 "enable_quickack": false, 00:16:18.253 "enable_placement_id": 0, 00:16:18.253 "enable_zerocopy_send_server": true, 00:16:18.253 "enable_zerocopy_send_client": false, 00:16:18.253 "zerocopy_threshold": 0, 00:16:18.253 "tls_version": 0, 00:16:18.253 "enable_ktls": false 00:16:18.253 } 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "method": "sock_impl_set_options", 00:16:18.253 "params": { 00:16:18.253 "impl_name": "posix", 00:16:18.253 "recv_buf_size": 2097152, 00:16:18.253 "send_buf_size": 2097152, 00:16:18.253 "enable_recv_pipe": true, 00:16:18.253 "enable_quickack": false, 00:16:18.253 "enable_placement_id": 0, 00:16:18.253 "enable_zerocopy_send_server": true, 00:16:18.253 "enable_zerocopy_send_client": false, 00:16:18.253 "zerocopy_threshold": 0, 00:16:18.253 "tls_version": 0, 00:16:18.253 "enable_ktls": false 00:16:18.253 } 00:16:18.253 } 00:16:18.253 ] 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "subsystem": "vmd", 00:16:18.253 "config": [] 00:16:18.253 }, 00:16:18.253 { 00:16:18.253 "subsystem": "accel", 00:16:18.254 "config": [ 00:16:18.254 { 00:16:18.254 "method": "accel_set_options", 00:16:18.254 "params": { 00:16:18.254 "small_cache_size": 128, 00:16:18.254 "large_cache_size": 16, 00:16:18.254 "task_count": 2048, 00:16:18.254 "sequence_count": 2048, 00:16:18.254 "buf_count": 2048 00:16:18.254 } 00:16:18.254 } 00:16:18.254 ] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "bdev", 00:16:18.254 "config": [ 00:16:18.254 { 00:16:18.254 "method": "bdev_set_options", 00:16:18.254 "params": { 00:16:18.254 "bdev_io_pool_size": 65535, 00:16:18.254 "bdev_io_cache_size": 256, 00:16:18.254 "bdev_auto_examine": true, 00:16:18.254 "iobuf_small_cache_size": 128, 00:16:18.254 "iobuf_large_cache_size": 16 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "bdev_raid_set_options", 00:16:18.254 "params": { 00:16:18.254 "process_window_size_kb": 1024, 00:16:18.254 "process_max_bandwidth_mb_sec": 0 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "bdev_iscsi_set_options", 00:16:18.254 "params": { 00:16:18.254 "timeout_sec": 30 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "bdev_nvme_set_options", 00:16:18.254 "params": { 00:16:18.254 "action_on_timeout": "none", 00:16:18.254 "timeout_us": 0, 00:16:18.254 "timeout_admin_us": 0, 
00:16:18.254 "keep_alive_timeout_ms": 10000, 00:16:18.254 "arbitration_burst": 0, 00:16:18.254 "low_priority_weight": 0, 00:16:18.254 "medium_priority_weight": 0, 00:16:18.254 "high_priority_weight": 0, 00:16:18.254 "nvme_adminq_poll_period_us": 10000, 00:16:18.254 "nvme_ioq_poll_period_us": 0, 00:16:18.254 "io_queue_requests": 0, 00:16:18.254 "delay_cmd_submit": true, 00:16:18.254 "transport_retry_count": 4, 00:16:18.254 "bdev_retry_count": 3, 00:16:18.254 "transport_ack_timeout": 0, 00:16:18.254 "ctrlr_loss_timeout_sec": 0, 00:16:18.254 "reconnect_delay_sec": 0, 00:16:18.254 "fast_io_fail_timeout_sec": 0, 00:16:18.254 "disable_auto_failback": false, 00:16:18.254 "generate_uuids": false, 00:16:18.254 "transport_tos": 0, 00:16:18.254 "nvme_error_stat": false, 00:16:18.254 "rdma_srq_size": 0, 00:16:18.254 "io_path_stat": false, 00:16:18.254 "allow_accel_sequence": false, 00:16:18.254 "rdma_max_cq_size": 0, 00:16:18.254 "rdma_cm_event_timeout_ms": 0, 00:16:18.254 "dhchap_digests": [ 00:16:18.254 "sha256", 00:16:18.254 "sha384", 00:16:18.254 "sha512" 00:16:18.254 ], 00:16:18.254 "dhchap_dhgroups": [ 00:16:18.254 "null", 00:16:18.254 "ffdhe2048", 00:16:18.254 "ffdhe3072", 00:16:18.254 "ffdhe4096", 00:16:18.254 "ffdhe6144", 00:16:18.254 "ffdhe8192" 00:16:18.254 ] 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "bdev_nvme_set_hotplug", 00:16:18.254 "params": { 00:16:18.254 "period_us": 100000, 00:16:18.254 "enable": false 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "bdev_malloc_create", 00:16:18.254 "params": { 00:16:18.254 "name": "malloc0", 00:16:18.254 "num_blocks": 8192, 00:16:18.254 "block_size": 4096, 00:16:18.254 "physical_block_size": 4096, 00:16:18.254 "uuid": "6d31108b-3560-400e-b720-e69a1ca8f4d5", 00:16:18.254 "optimal_io_boundary": 0, 00:16:18.254 "md_size": 0, 00:16:18.254 "dif_type": 0, 00:16:18.254 "dif_is_head_of_md": false, 00:16:18.254 "dif_pi_format": 0 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "bdev_wait_for_examine" 00:16:18.254 } 00:16:18.254 ] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "scsi", 00:16:18.254 "config": null 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "scheduler", 00:16:18.254 "config": [ 00:16:18.254 { 00:16:18.254 "method": "framework_set_scheduler", 00:16:18.254 "params": { 00:16:18.254 "name": "static" 00:16:18.254 } 00:16:18.254 } 00:16:18.254 ] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "vhost_scsi", 00:16:18.254 "config": [] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "vhost_blk", 00:16:18.254 "config": [] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "ublk", 00:16:18.254 "config": [ 00:16:18.254 { 00:16:18.254 "method": "ublk_create_target", 00:16:18.254 "params": { 00:16:18.254 "cpumask": "1" 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "ublk_start_disk", 00:16:18.254 "params": { 00:16:18.254 "bdev_name": "malloc0", 00:16:18.254 "ublk_id": 0, 00:16:18.254 "num_queues": 1, 00:16:18.254 "queue_depth": 128 00:16:18.254 } 00:16:18.254 } 00:16:18.254 ] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "nbd", 00:16:18.254 "config": [] 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "subsystem": "nvmf", 00:16:18.254 "config": [ 00:16:18.254 { 00:16:18.254 "method": "nvmf_set_config", 00:16:18.254 "params": { 00:16:18.254 "discovery_filter": "match_any", 00:16:18.254 "admin_cmd_passthru": { 00:16:18.254 "identify_ctrlr": false 00:16:18.254 }, 00:16:18.254 "dhchap_digests": [ 00:16:18.254 "sha256", 
00:16:18.254 "sha384", 00:16:18.254 "sha512" 00:16:18.254 ], 00:16:18.254 "dhchap_dhgroups": [ 00:16:18.254 "null", 00:16:18.254 "ffdhe2048", 00:16:18.254 "ffdhe3072", 00:16:18.254 "ffdhe4096", 00:16:18.254 "ffdhe6144", 00:16:18.254 "ffdhe8192" 00:16:18.254 ] 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "nvmf_set_max_subsystems", 00:16:18.254 "params": { 00:16:18.254 "max_subsystems": 1024 00:16:18.254 } 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "method": "nvmf_set_crdt", 00:16:18.254 "params": { 00:16:18.254 "crdt1": 0, 00:16:18.255 "crdt2": 0, 00:16:18.255 "crdt3": 0 00:16:18.255 } 00:16:18.255 } 00:16:18.255 ] 00:16:18.255 }, 00:16:18.255 { 00:16:18.255 "subsystem": "iscsi", 00:16:18.255 "config": [ 00:16:18.255 { 00:16:18.255 "method": "iscsi_set_options", 00:16:18.255 "params": { 00:16:18.255 "node_base": "iqn.2016-06.io.spdk", 00:16:18.255 "max_sessions": 128, 00:16:18.255 "max_connections_per_session": 2, 00:16:18.255 "max_queue_depth": 64, 00:16:18.255 "default_time2wait": 2, 00:16:18.255 "default_time2retain": 20, 00:16:18.255 "first_burst_length": 8192, 00:16:18.255 "immediate_data": true, 00:16:18.255 "allow_duplicated_isid": false, 00:16:18.255 "error_recovery_level": 0, 00:16:18.255 "nop_timeout": 60, 00:16:18.255 "nop_in_interval": 30, 00:16:18.255 "disable_chap": false, 00:16:18.255 "require_chap": false, 00:16:18.255 "mutual_chap": false, 00:16:18.255 "chap_group": 0, 00:16:18.255 "max_large_datain_per_connection": 64, 00:16:18.255 "max_r2t_per_connection": 4, 00:16:18.255 "pdu_pool_size": 36864, 00:16:18.255 "immediate_data_pool_size": 16384, 00:16:18.255 "data_out_pool_size": 2048 00:16:18.255 } 00:16:18.255 } 00:16:18.255 ] 00:16:18.255 } 00:16:18.255 ] 00:16:18.255 }' 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73581 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73581 ']' 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73581 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73581 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73581' 00:16:18.255 killing process with pid 73581 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73581 00:16:18.255 23:53:50 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73581 00:16:19.199 [2024-12-05 23:53:51.864946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.199 [2024-12-05 23:53:51.904111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.199 [2024-12-05 23:53:51.904258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.460 [2024-12-05 23:53:51.915011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.460 [2024-12-05 23:53:51.915080] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:16:19.460 [2024-12-05 23:53:51.915095] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:19.460 [2024-12-05 23:53:51.915125] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.460 [2024-12-05 23:53:51.915288] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:20.847 23:53:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:20.847 23:53:53 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73637 00:16:20.847 23:53:53 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73637 00:16:20.847 23:53:53 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73637 ']' 00:16:20.847 23:53:53 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.847 23:53:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:20.847 "subsystems": [ 00:16:20.847 { 00:16:20.847 "subsystem": "fsdev", 00:16:20.847 "config": [ 00:16:20.847 { 00:16:20.847 "method": "fsdev_set_opts", 00:16:20.847 "params": { 00:16:20.847 "fsdev_io_pool_size": 65535, 00:16:20.847 "fsdev_io_cache_size": 256 00:16:20.847 } 00:16:20.847 } 00:16:20.847 ] 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "subsystem": "keyring", 00:16:20.847 "config": [] 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "subsystem": "iobuf", 00:16:20.847 "config": [ 00:16:20.847 { 00:16:20.847 "method": "iobuf_set_options", 00:16:20.847 "params": { 00:16:20.847 "small_pool_count": 8192, 00:16:20.847 "large_pool_count": 1024, 00:16:20.847 "small_bufsize": 8192, 00:16:20.847 "large_bufsize": 135168, 00:16:20.847 "enable_numa": false 00:16:20.847 } 00:16:20.847 } 00:16:20.847 ] 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "subsystem": "sock", 00:16:20.847 "config": [ 00:16:20.847 { 00:16:20.847 "method": "sock_set_default_impl", 00:16:20.847 "params": { 00:16:20.847 "impl_name": "posix" 00:16:20.847 } 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "method": "sock_impl_set_options", 00:16:20.847 "params": { 00:16:20.847 "impl_name": "ssl", 00:16:20.847 "recv_buf_size": 4096, 00:16:20.847 "send_buf_size": 4096, 00:16:20.847 "enable_recv_pipe": true, 00:16:20.847 "enable_quickack": false, 00:16:20.847 "enable_placement_id": 0, 00:16:20.847 "enable_zerocopy_send_server": true, 00:16:20.847 "enable_zerocopy_send_client": false, 00:16:20.847 "zerocopy_threshold": 0, 00:16:20.847 "tls_version": 0, 00:16:20.847 "enable_ktls": false 00:16:20.847 } 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "method": "sock_impl_set_options", 00:16:20.847 "params": { 00:16:20.847 "impl_name": "posix", 00:16:20.847 "recv_buf_size": 2097152, 00:16:20.847 "send_buf_size": 2097152, 00:16:20.847 "enable_recv_pipe": true, 00:16:20.847 "enable_quickack": false, 00:16:20.847 "enable_placement_id": 0, 00:16:20.847 "enable_zerocopy_send_server": true, 00:16:20.847 "enable_zerocopy_send_client": false, 00:16:20.847 "zerocopy_threshold": 0, 00:16:20.847 "tls_version": 0, 00:16:20.847 "enable_ktls": false 00:16:20.847 } 00:16:20.847 } 00:16:20.847 ] 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "subsystem": "vmd", 00:16:20.847 "config": [] 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "subsystem": "accel", 00:16:20.847 "config": [ 00:16:20.847 { 00:16:20.847 "method": "accel_set_options", 00:16:20.847 "params": { 00:16:20.847 "small_cache_size": 128, 00:16:20.847 "large_cache_size": 16, 00:16:20.847 "task_count": 2048, 00:16:20.847 "sequence_count": 2048, 00:16:20.847 "buf_count": 2048 00:16:20.847 } 00:16:20.847 } 
00:16:20.847 ] 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "subsystem": "bdev", 00:16:20.847 "config": [ 00:16:20.847 { 00:16:20.847 "method": "bdev_set_options", 00:16:20.847 "params": { 00:16:20.847 "bdev_io_pool_size": 65535, 00:16:20.847 "bdev_io_cache_size": 256, 00:16:20.847 "bdev_auto_examine": true, 00:16:20.847 "iobuf_small_cache_size": 128, 00:16:20.847 "iobuf_large_cache_size": 16 00:16:20.847 } 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "method": "bdev_raid_set_options", 00:16:20.847 "params": { 00:16:20.847 "process_window_size_kb": 1024, 00:16:20.847 "process_max_bandwidth_mb_sec": 0 00:16:20.847 } 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "method": "bdev_iscsi_set_options", 00:16:20.847 "params": { 00:16:20.847 "timeout_sec": 30 00:16:20.847 } 00:16:20.847 }, 00:16:20.847 { 00:16:20.847 "method": "bdev_nvme_set_options", 00:16:20.847 "params": { 00:16:20.847 "action_on_timeout": "none", 00:16:20.847 "timeout_us": 0, 00:16:20.847 "timeout_admin_us": 0, 00:16:20.847 "keep_alive_timeout_ms": 10000, 00:16:20.847 "arbitration_burst": 0, 00:16:20.847 "low_priority_weight": 0, 00:16:20.847 "medium_priority_weight": 0, 00:16:20.847 "high_priority_weight": 0, 00:16:20.847 "nvme_adminq_poll_period_us": 10000, 00:16:20.847 "nvme_ioq_poll_period_us": 0, 00:16:20.847 "io_queue_requests": 0, 00:16:20.847 "delay_cmd_submit": true, 00:16:20.847 "transport_retry_count": 4, 00:16:20.847 "bdev_retry_count": 3, 00:16:20.847 "transport_ack_timeout": 0, 00:16:20.847 "ctrlr_loss_timeout_sec": 0, 00:16:20.847 "reconnect_delay_sec": 0, 00:16:20.847 "fast_io_fail_timeout_sec": 0, 00:16:20.848 "disable_auto_failback": false, 00:16:20.848 "generate_uuids": false, 00:16:20.848 "transport_tos": 0, 00:16:20.848 "nvme_error_stat": false, 00:16:20.848 "rdma_srq_size": 0, 00:16:20.848 "io_path_stat": false, 00:16:20.848 "allow_accel_sequence": false, 00:16:20.848 "rdma_max_cq_size": 0, 00:16:20.848 "rdma_cm_event_timeout_ms": 0, 00:16:20.848 "dhchap_digests": [ 00:16:20.848 "sha256", 00:16:20.848 "sha384", 00:16:20.848 "sha512" 00:16:20.848 ], 00:16:20.848 "dhchap_dhgroups": [ 00:16:20.848 "null", 00:16:20.848 "ffdhe2048", 00:16:20.848 "ffdhe3072", 00:16:20.848 "ffdhe4096", 00:16:20.848 "ffdhe6144", 00:16:20.848 "ffdhe8192" 00:16:20.848 ] 00:16:20.848 } 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "method": "bdev_nvme_set_hotplug", 00:16:20.848 "params": { 00:16:20.848 "period_us": 100000, 00:16:20.848 "enable": false 00:16:20.848 } 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "method": "bdev_malloc_create", 00:16:20.848 "params": { 00:16:20.848 "name": "malloc0", 00:16:20.848 "num_blocks": 8192, 00:16:20.848 "block_size": 4096, 00:16:20.848 "physical_block_size": 4096, 00:16:20.848 "uuid": "6d31108b-3560-400e-b720-e69a1ca8f4d5", 00:16:20.848 "optimal_io_boundary": 0, 00:16:20.848 "md_size": 0, 00:16:20.848 "dif_type": 0, 00:16:20.848 "dif_is_head_of_md": false, 00:16:20.848 "dif_pi_format": 0 00:16:20.848 } 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "method": "bdev_wait_for_examine" 00:16:20.848 } 00:16:20.848 ] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "scsi", 00:16:20.848 "config": null 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "scheduler", 00:16:20.848 "config": [ 00:16:20.848 { 00:16:20.848 "method": "framework_set_scheduler", 00:16:20.848 "params": { 00:16:20.848 "name": "static" 00:16:20.848 } 00:16:20.848 } 00:16:20.848 ] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "vhost_scsi", 00:16:20.848 "config": [] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 
"subsystem": "vhost_blk", 00:16:20.848 "config": [] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "ublk", 00:16:20.848 "config": [ 00:16:20.848 { 00:16:20.848 "method": "ublk_create_target", 00:16:20.848 "params": { 00:16:20.848 "cpumask": "1" 00:16:20.848 } 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "method": "ublk_start_disk", 00:16:20.848 "params": { 00:16:20.848 "bdev_name": "malloc0", 00:16:20.848 "ublk_id": 0, 00:16:20.848 "num_queues": 1, 00:16:20.848 "queue_depth": 128 00:16:20.848 } 00:16:20.848 } 00:16:20.848 ] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "nbd", 00:16:20.848 "config": [] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "nvmf", 00:16:20.848 "config": [ 00:16:20.848 { 00:16:20.848 "method": "nvmf_set_config", 00:16:20.848 "params": { 00:16:20.848 "discovery_filter": "match_any", 00:16:20.848 "admin_cmd_passthru": { 00:16:20.848 "identify_ctrlr": false 00:16:20.848 }, 00:16:20.848 "dhchap_digests": [ 00:16:20.848 "sha256", 00:16:20.848 "sha384", 00:16:20.848 "sha512" 00:16:20.848 ], 00:16:20.848 "dhchap_dhgroups": [ 00:16:20.848 "null", 00:16:20.848 "ffdhe2048", 00:16:20.848 "ffdhe3072", 00:16:20.848 "ffdhe4096", 00:16:20.848 "ffdhe6144", 00:16:20.848 "ffdhe8192" 00:16:20.848 ] 00:16:20.848 } 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "method": "nvmf_set_max_subsystems", 00:16:20.848 "params": { 00:16:20.848 "max_subsystems": 1024 00:16:20.848 } 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "method": "nvmf_set_crdt", 00:16:20.848 "params": { 00:16:20.848 "crdt1": 0, 00:16:20.848 "crdt2": 0, 00:16:20.848 "crdt3": 0 00:16:20.848 } 00:16:20.848 } 00:16:20.848 ] 00:16:20.848 }, 00:16:20.848 { 00:16:20.848 "subsystem": "iscsi", 00:16:20.848 "config": [ 00:16:20.848 { 00:16:20.848 "method": "iscsi_set_options", 00:16:20.848 "params": { 00:16:20.848 "node_base": "iqn.2016-06.io.spdk", 00:16:20.848 "max_sessions": 128, 00:16:20.848 "max_connections_per_session": 2, 00:16:20.848 "max_queue_depth": 64, 00:16:20.848 "default_time2wait": 2, 00:16:20.848 "default_time2retain": 20, 00:16:20.848 "first_burst_length": 8192, 00:16:20.848 "immediate_data": true, 00:16:20.848 "allow_duplicated_isid": false, 00:16:20.848 "error_recovery_level": 0, 00:16:20.848 "nop_timeout": 60, 00:16:20.848 "nop_in_interval": 30, 00:16:20.848 "disable_chap": false, 00:16:20.848 "require_chap": false, 00:16:20.848 "mutual_chap": false, 00:16:20.848 "chap_group": 0, 00:16:20.848 "max_large_datain_per_connection": 64, 00:16:20.848 "max_r2t_per_connection": 4, 00:16:20.848 "pdu_pool_size": 36864, 00:16:20.848 "immediate_data_pool_size": 16384, 00:16:20.848 "data_out_pool_size": 2048 00:16:20.848 } 00:16:20.848 } 00:16:20.848 ] 00:16:20.848 } 00:16:20.848 ] 00:16:20.848 }' 00:16:20.848 23:53:53 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.848 23:53:53 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.848 23:53:53 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.848 23:53:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.848 [2024-12-05 23:53:53.358202] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:16:20.848 [2024-12-05 23:53:53.358324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73637 ] 00:16:20.848 [2024-12-05 23:53:53.504189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.108 [2024-12-05 23:53:53.588223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.680 [2024-12-05 23:53:54.247988] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.680 [2024-12-05 23:53:54.248644] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.680 [2024-12-05 23:53:54.256072] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:21.680 [2024-12-05 23:53:54.256132] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:21.680 [2024-12-05 23:53:54.256139] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:21.680 [2024-12-05 23:53:54.256145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:21.680 [2024-12-05 23:53:54.265036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:21.680 [2024-12-05 23:53:54.265053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:21.680 [2024-12-05 23:53:54.271987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:21.680 [2024-12-05 23:53:54.272059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:21.680 [2024-12-05 23:53:54.287992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73637 00:16:21.680 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73637 ']' 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73637 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73637 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.681 killing process with pid 73637 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73637' 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73637 00:16:21.681 23:53:54 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73637 00:16:23.063 [2024-12-05 23:53:55.390490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:23.063 [2024-12-05 23:53:55.423998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:23.063 [2024-12-05 23:53:55.424098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:23.063 [2024-12-05 23:53:55.431989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:23.063 [2024-12-05 23:53:55.432031] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:23.063 [2024-12-05 23:53:55.432037] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:23.063 [2024-12-05 23:53:55.432057] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:23.063 [2024-12-05 23:53:55.432167] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:24.004 23:53:56 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:24.004 00:16:24.004 real 0m7.273s 00:16:24.004 user 0m5.087s 00:16:24.004 sys 0m2.778s 00:16:24.004 23:53:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.004 23:53:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.004 ************************************ 00:16:24.004 END TEST test_save_ublk_config 00:16:24.004 ************************************ 00:16:24.004 23:53:56 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73706 00:16:24.004 23:53:56 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.004 23:53:56 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73706 00:16:24.004 23:53:56 ublk -- common/autotest_common.sh@835 -- # '[' -z 73706 ']' 00:16:24.004 23:53:56 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.004 23:53:56 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.004 23:53:56 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:24.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.004 23:53:56 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.004 23:53:56 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.004 23:53:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.265 [2024-12-05 23:53:56.746391] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
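At this point the harness starts a fresh spdk_tgt with reactor mask 0x3 and the ublk debug log flag, then waits for the RPC socket at /var/tmp/spdk.sock before running the create tests. A simplified stand-in for that launch-and-wait pattern (the polling loop below replaces the harness's waitforlisten helper and is illustrative only):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  spdk_pid=$!
  # Poll the RPC socket until the target answers (simplified waitforlisten).
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done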
00:16:24.265 [2024-12-05 23:53:56.746833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73706 ] 00:16:24.265 [2024-12-05 23:53:56.903127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.526 [2024-12-05 23:53:56.987528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.526 [2024-12-05 23:53:56.987669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.098 23:53:57 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.098 23:53:57 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:25.098 23:53:57 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:25.098 23:53:57 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:25.098 23:53:57 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.098 23:53:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 ************************************ 00:16:25.098 START TEST test_create_ublk 00:16:25.098 ************************************ 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 [2024-12-05 23:53:57.583983] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.098 [2024-12-05 23:53:57.585539] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 [2024-12-05 23:53:57.748094] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:25.098 [2024-12-05 23:53:57.748405] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:25.098 [2024-12-05 23:53:57.748421] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:25.098 [2024-12-05 23:53:57.748426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:25.098 [2024-12-05 23:53:57.755998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:25.098 [2024-12-05 23:53:57.756018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:25.098 
[2024-12-05 23:53:57.763988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:25.098 [2024-12-05 23:53:57.764507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:25.098 [2024-12-05 23:53:57.785989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:25.098 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.098 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.358 23:53:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:25.358 { 00:16:25.358 "ublk_device": "/dev/ublkb0", 00:16:25.358 "id": 0, 00:16:25.358 "queue_depth": 512, 00:16:25.358 "num_queues": 4, 00:16:25.358 "bdev_name": "Malloc0" 00:16:25.358 } 00:16:25.358 ]' 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
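For readability, the fio template assembled above expands to the following standalone invocation; every flag is copied from the log (a 10-second time-based write of pattern 0xcc over the first 128 MiB of /dev/ublkb0, with verification enabled):

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0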
00:16:25.358 23:53:57 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:25.618 fio: verification read phase will never start because write phase uses all of runtime 00:16:25.618 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:25.618 fio-3.35 00:16:25.618 Starting 1 process 00:16:35.691 00:16:35.691 fio_test: (groupid=0, jobs=1): err= 0: pid=73750: Thu Dec 5 23:54:08 2024 00:16:35.691 write: IOPS=20.1k, BW=78.5MiB/s (82.4MB/s)(786MiB/10001msec); 0 zone resets 00:16:35.691 clat (usec): min=33, max=3985, avg=48.92, stdev=81.44 00:16:35.691 lat (usec): min=34, max=3985, avg=49.38, stdev=81.45 00:16:35.691 clat percentiles (usec): 00:16:35.691 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:16:35.691 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:16:35.691 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 54], 95.00th=[ 59], 00:16:35.691 | 99.00th=[ 67], 99.50th=[ 73], 99.90th=[ 1352], 99.95th=[ 2409], 00:16:35.691 | 99.99th=[ 3458] 00:16:35.691 bw ( KiB/s): min=70768, max=84768, per=99.91%, avg=80358.74, stdev=3690.42, samples=19 00:16:35.691 iops : min=17692, max=21192, avg=20089.79, stdev=922.68, samples=19 00:16:35.691 lat (usec) : 50=83.22%, 100=16.53%, 250=0.09%, 500=0.02%, 750=0.01% 00:16:35.691 lat (usec) : 1000=0.01% 00:16:35.691 lat (msec) : 2=0.05%, 4=0.07% 00:16:35.691 cpu : usr=2.96%, sys=15.54%, ctx=201103, majf=0, minf=797 00:16:35.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.691 issued rwts: total=0,201105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.691 00:16:35.691 Run status group 0 (all jobs): 00:16:35.691 WRITE: bw=78.5MiB/s (82.4MB/s), 78.5MiB/s-78.5MiB/s (82.4MB/s-82.4MB/s), io=786MiB (824MB), run=10001-10001msec 00:16:35.691 00:16:35.691 Disk stats (read/write): 00:16:35.691 ublkb0: ios=0/198986, merge=0/0, ticks=0/8180, in_queue=8181, util=99.08% 00:16:35.691 23:54:08 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [2024-12-05 23:54:08.206546] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.691 [2024-12-05 23:54:08.246399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.691 [2024-12-05 23:54:08.247371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:35.691 [2024-12-05 23:54:08.253985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:35.691 [2024-12-05 23:54:08.254238] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:35.691 [2024-12-05 23:54:08.254254] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.691 23:54:08 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [2024-12-05 23:54:08.262042] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:35.691 request: 00:16:35.691 { 00:16:35.691 "ublk_id": 0, 00:16:35.691 "method": "ublk_stop_disk", 00:16:35.691 "req_id": 1 00:16:35.691 } 00:16:35.691 Got JSON-RPC error response 00:16:35.691 response: 00:16:35.691 { 00:16:35.691 "code": -19, 00:16:35.691 "message": "No such device" 00:16:35.691 } 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.691 23:54:08 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [2024-12-05 23:54:08.278051] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:35.691 [2024-12-05 23:54:08.281756] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:35.691 [2024-12-05 23:54:08.281787] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:35.691 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.691 23:54:08 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:35.692 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.692 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.949 23:54:08 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:35.949 23:54:08 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:35.949 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.949 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.210 23:54:08 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:36.210 23:54:08 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:36.210 23:54:08 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:36.210 23:54:08 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:36.210 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.210 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.210 23:54:08 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:36.210 23:54:08 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:36.210 23:54:08 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:36.210 00:16:36.210 real 0m11.170s 00:16:36.210 user 0m0.606s 00:16:36.210 sys 0m1.628s 00:16:36.210 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.210 23:54:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 ************************************ 00:16:36.210 END TEST test_create_ublk 00:16:36.210 ************************************ 00:16:36.210 23:54:08 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:36.210 23:54:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:36.210 23:54:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.210 23:54:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 ************************************ 00:16:36.210 START TEST test_create_multi_ublk 00:16:36.210 ************************************ 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 [2024-12-05 23:54:08.796974] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:36.210 [2024-12-05 23:54:08.798494] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.210 23:54:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.470 [2024-12-05 23:54:09.013107] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
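test_create_multi_ublk repeats the same create sequence for Malloc0 through Malloc3. Restated as plain RPC calls (illustrative; rpc.py against the default /var/tmp/spdk.sock, parameters as shown in the xtrace above):

  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0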
00:16:36.470 [2024-12-05 23:54:09.013407] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:36.470 [2024-12-05 23:54:09.013416] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:36.470 [2024-12-05 23:54:09.013424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.470 [2024-12-05 23:54:09.025029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.470 [2024-12-05 23:54:09.025050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.470 [2024-12-05 23:54:09.036992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.470 [2024-12-05 23:54:09.037494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:36.470 [2024-12-05 23:54:09.045127] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.470 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.731 [2024-12-05 23:54:09.279079] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:36.731 [2024-12-05 23:54:09.279382] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:36.731 [2024-12-05 23:54:09.279396] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:36.731 [2024-12-05 23:54:09.279401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.731 [2024-12-05 23:54:09.287016] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.731 [2024-12-05 23:54:09.287034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.731 [2024-12-05 23:54:09.294987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.731 [2024-12-05 23:54:09.295491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:36.731 [2024-12-05 23:54:09.318991] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.731 23:54:09 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.731 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.991 [2024-12-05 23:54:09.479075] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:36.991 [2024-12-05 23:54:09.479376] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:36.991 [2024-12-05 23:54:09.479388] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:36.991 [2024-12-05 23:54:09.479394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.991 [2024-12-05 23:54:09.487006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.991 [2024-12-05 23:54:09.487028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.991 [2024-12-05 23:54:09.494990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.991 [2024-12-05 23:54:09.495489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:36.991 [2024-12-05 23:54:09.499797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.991 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.992 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:36.992 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:36.992 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.992 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.992 [2024-12-05 23:54:09.667092] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:36.992 [2024-12-05 23:54:09.667389] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:36.992 [2024-12-05 23:54:09.667403] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:36.992 [2024-12-05 23:54:09.667408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.992 [2024-12-05 
23:54:09.675012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.992 [2024-12-05 23:54:09.675030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.992 [2024-12-05 23:54:09.682992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.992 [2024-12-05 23:54:09.683496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:36.992 [2024-12-05 23:54:09.692010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:37.253 { 00:16:37.253 "ublk_device": "/dev/ublkb0", 00:16:37.253 "id": 0, 00:16:37.253 "queue_depth": 512, 00:16:37.253 "num_queues": 4, 00:16:37.253 "bdev_name": "Malloc0" 00:16:37.253 }, 00:16:37.253 { 00:16:37.253 "ublk_device": "/dev/ublkb1", 00:16:37.253 "id": 1, 00:16:37.253 "queue_depth": 512, 00:16:37.253 "num_queues": 4, 00:16:37.253 "bdev_name": "Malloc1" 00:16:37.253 }, 00:16:37.253 { 00:16:37.253 "ublk_device": "/dev/ublkb2", 00:16:37.253 "id": 2, 00:16:37.253 "queue_depth": 512, 00:16:37.253 "num_queues": 4, 00:16:37.253 "bdev_name": "Malloc2" 00:16:37.253 }, 00:16:37.253 { 00:16:37.253 "ublk_device": "/dev/ublkb3", 00:16:37.253 "id": 3, 00:16:37.253 "queue_depth": 512, 00:16:37.253 "num_queues": 4, 00:16:37.253 "bdev_name": "Malloc3" 00:16:37.253 } 00:16:37.253 ]' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
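The assertions that follow walk the array returned by ublk_get_disks and compare each field against its expected value. The same checks can be reproduced by hand (illustrative; index 1 corresponds to /dev/ublkb1 in the listing above):

  ./scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'   # /dev/ublkb1
  ./scripts/rpc.py ublk_get_disks | jq -r '.[1].queue_depth'   # 512
  ./scripts/rpc.py ublk_get_disks | jq -r '.[1].bdev_name'     # Malloc1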
00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:37.253 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:37.514 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.514 23:54:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.514 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.775 [2024-12-05 23:54:10.379073] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.775 [2024-12-05 23:54:10.421457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.775 [2024-12-05 23:54:10.422466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.775 [2024-12-05 23:54:10.426992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.775 [2024-12-05 23:54:10.427224] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:37.775 [2024-12-05 23:54:10.427237] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.775 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.775 [2024-12-05 23:54:10.443070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.036 [2024-12-05 23:54:10.491023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.036 [2024-12-05 23:54:10.491670] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.036 [2024-12-05 23:54:10.498993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.036 [2024-12-05 23:54:10.499223] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:38.036 [2024-12-05 23:54:10.499236] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:38.036 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.036 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.036 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:38.036 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.036 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.036 [2024-12-05 23:54:10.515056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.036 [2024-12-05 23:54:10.549458] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.037 [2024-12-05 23:54:10.550419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.037 [2024-12-05 23:54:10.554996] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.037 [2024-12-05 23:54:10.555224] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:38.037 [2024-12-05 23:54:10.555236] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
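Teardown in this test is the mirror image of setup: each ublk device is stopped by id, then the target is destroyed with an extended RPC timeout. Condensed into plain RPC calls (illustrative; the 120-second timeout matches the rpc.py call that appears in the log below):

  for i in 0 1 2 3; do ./scripts/rpc.py ublk_stop_disk "$i"; done
  ./scripts/rpc.py -t 120 ublk_destroy_target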
00:16:38.037 [2024-12-05 23:54:10.570047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.037 [2024-12-05 23:54:10.611438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.037 [2024-12-05 23:54:10.612352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.037 [2024-12-05 23:54:10.618991] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.037 [2024-12-05 23:54:10.619204] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:38.037 [2024-12-05 23:54:10.619216] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.037 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:38.298 [2024-12-05 23:54:10.811043] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:38.298 [2024-12-05 23:54:10.814657] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:38.298 [2024-12-05 23:54:10.814687] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:38.298 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:38.298 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.298 23:54:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:38.298 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.298 23:54:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.559 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.559 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.559 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:38.559 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.559 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:39.129 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.130 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.388 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.389 23:54:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.389 23:54:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.389 23:54:12 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:39.389 23:54:12 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:39.389 23:54:12 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:39.389 00:16:39.389 real 0m3.252s 00:16:39.389 user 0m0.839s 00:16:39.389 sys 0m0.145s 00:16:39.389 23:54:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.389 23:54:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.389 ************************************ 00:16:39.389 END TEST test_create_multi_ublk 00:16:39.389 ************************************ 00:16:39.389 23:54:12 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:39.389 23:54:12 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:39.389 23:54:12 ublk -- ublk/ublk.sh@130 -- # killprocess 73706 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@954 -- # '[' -z 73706 ']' 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@958 -- # kill -0 73706 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@959 -- # uname 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73706 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.389 killing process with pid 73706 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73706' 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@973 -- # kill 73706 00:16:39.389 23:54:12 ublk -- common/autotest_common.sh@978 -- # wait 73706 00:16:39.955 [2024-12-05 23:54:12.641567] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.955 [2024-12-05 23:54:12.641620] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:40.891 00:16:40.891 real 0m24.137s 00:16:40.891 user 0m34.815s 00:16:40.891 sys 0m9.488s 00:16:40.891 23:54:13 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.891 23:54:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 ************************************ 00:16:40.891 END TEST ublk 00:16:40.891 ************************************ 00:16:40.891 23:54:13 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:40.891 23:54:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:16:40.891 23:54:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.891 23:54:13 -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 ************************************ 00:16:40.891 START TEST ublk_recovery 00:16:40.891 ************************************ 00:16:40.891 23:54:13 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:40.891 * Looking for test storage... 00:16:40.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:40.891 23:54:13 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:40.891 23:54:13 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:40.891 23:54:13 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:40.891 23:54:13 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.891 23:54:13 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:40.892 23:54:13 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.892 23:54:13 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.892 23:54:13 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.892 23:54:13 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:40.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.892 --rc genhtml_branch_coverage=1 00:16:40.892 --rc genhtml_function_coverage=1 00:16:40.892 --rc genhtml_legend=1 00:16:40.892 --rc geninfo_all_blocks=1 00:16:40.892 --rc geninfo_unexecuted_blocks=1 00:16:40.892 00:16:40.892 ' 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:40.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.892 --rc genhtml_branch_coverage=1 00:16:40.892 --rc genhtml_function_coverage=1 00:16:40.892 --rc genhtml_legend=1 00:16:40.892 --rc geninfo_all_blocks=1 00:16:40.892 --rc geninfo_unexecuted_blocks=1 00:16:40.892 00:16:40.892 ' 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:40.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.892 --rc genhtml_branch_coverage=1 00:16:40.892 --rc genhtml_function_coverage=1 00:16:40.892 --rc genhtml_legend=1 00:16:40.892 --rc geninfo_all_blocks=1 00:16:40.892 --rc geninfo_unexecuted_blocks=1 00:16:40.892 00:16:40.892 ' 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:40.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.892 --rc genhtml_branch_coverage=1 00:16:40.892 --rc genhtml_function_coverage=1 00:16:40.892 --rc genhtml_legend=1 00:16:40.892 --rc geninfo_all_blocks=1 00:16:40.892 --rc geninfo_unexecuted_blocks=1 00:16:40.892 00:16:40.892 ' 00:16:40.892 23:54:13 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:40.892 23:54:13 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:40.892 23:54:13 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:40.892 23:54:13 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74094 00:16:40.892 23:54:13 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.892 23:54:13 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74094 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74094 ']' 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.892 23:54:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 23:54:13 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:40.892 [2024-12-05 23:54:13.595272] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:16:40.892 [2024-12-05 23:54:13.595393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74094 ] 00:16:41.153 [2024-12-05 23:54:13.754791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:41.413 [2024-12-05 23:54:13.860819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.413 [2024-12-05 23:54:13.860842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:41.978 23:54:14 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.978 [2024-12-05 23:54:14.472986] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:41.978 [2024-12-05 23:54:14.474849] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.978 23:54:14 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.978 malloc0 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.978 23:54:14 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.978 [2024-12-05 23:54:14.575111] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:41.978 [2024-12-05 23:54:14.575206] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:41.978 [2024-12-05 23:54:14.575217] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:41.978 [2024-12-05 23:54:14.575224] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:41.978 [2024-12-05 23:54:14.586073] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:41.978 [2024-12-05 23:54:14.586094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:41.978 [2024-12-05 23:54:14.592997] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:41.978 [2024-12-05 23:54:14.593132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:41.978 [2024-12-05 23:54:14.609998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:41.978 1 00:16:41.978 23:54:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.978 23:54:14 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:42.918 23:54:15 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74134 00:16:42.918 23:54:15 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:42.918 23:54:15 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:43.179 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:43.179 fio-3.35 00:16:43.179 Starting 1 process 00:16:48.511 23:54:20 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74094 00:16:48.511 23:54:20 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:53.787 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74094 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:53.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.787 23:54:25 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74239 00:16:53.787 23:54:25 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.787 23:54:25 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74239 00:16:53.787 23:54:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74239 ']' 00:16:53.788 23:54:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.788 23:54:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.788 23:54:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.788 23:54:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.788 23:54:25 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:53.788 23:54:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.788 [2024-12-05 23:54:25.718520] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
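The trace to this point captures the crash/recovery scenario under test: the first spdk_tgt (pid 74094) created a ublk target, exposed /dev/ublkb1 backed by a malloc bdev with 2 queues and queue depth 128, started fio against it, and was then killed with SIGKILL mid-I/O; the second spdk_tgt (pid 74239) now starting will re-attach the same device. A minimal sketch of the driving RPC sequence, using the rpc_cmd calls visible in the trace (the plain scripts/rpc.py invocation form and default socket are assumptions; names, sizes, and flags are copied from the log):

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096      # 64 MB backing bdev, 4 KiB blocks
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # exposes /dev/ublkb1
  # ... fio runs against /dev/ublkb1, then the target process is killed with SIGKILL ...
  # after restarting spdk_tgt:
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1                # re-adopts ublk device 1 (UBLK_CMD_START_USER_RECOVERY)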
00:16:53.788 [2024-12-05 23:54:25.718648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74239 ] 00:16:53.788 [2024-12-05 23:54:25.879209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.788 [2024-12-05 23:54:25.977040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.788 [2024-12-05 23:54:25.977051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:54.046 23:54:26 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 [2024-12-05 23:54:26.575986] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.046 [2024-12-05 23:54:26.577818] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.046 23:54:26 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 malloc0 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.046 23:54:26 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 [2024-12-05 23:54:26.680106] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:54.046 [2024-12-05 23:54:26.680141] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:54.046 [2024-12-05 23:54:26.680152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:54.046 [2024-12-05 23:54:26.688015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:54.046 [2024-12-05 23:54:26.688037] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:54.046 1 00:16:54.046 23:54:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.046 23:54:26 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74134 00:16:54.988 [2024-12-05 23:54:27.688081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:54.988 [2024-12-05 23:54:27.694994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:54.988 [2024-12-05 23:54:27.695024] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:56.376 [2024-12-05 23:54:28.696007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:56.376 [2024-12-05 23:54:28.703992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:56.376 [2024-12-05 23:54:28.704023] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:57.313 [2024-12-05 23:54:29.704052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.313 [2024-12-05 23:54:29.711999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.313 [2024-12-05 23:54:29.712016] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:57.313 [2024-12-05 23:54:29.712026] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:57.313 [2024-12-05 23:54:29.712112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:19.246 [2024-12-05 23:54:51.136992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:19.246 [2024-12-05 23:54:51.142502] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:19.246 [2024-12-05 23:54:51.149162] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:19.246 [2024-12-05 23:54:51.149181] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:45.824 00:17:45.824 fio_test: (groupid=0, jobs=1): err= 0: pid=74138: Thu Dec 5 23:55:15 2024 00:17:45.824 read: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(3263MiB/60003msec) 00:17:45.824 slat (nsec): min=932, max=427794, avg=5321.15, stdev=2357.63 00:17:45.824 clat (usec): min=993, max=30534k, avg=4820.18, stdev=281475.33 00:17:45.824 lat (usec): min=998, max=30534k, avg=4825.50, stdev=281475.32 00:17:45.824 clat percentiles (usec): 00:17:45.824 | 1.00th=[ 1729], 5.00th=[ 1827], 10.00th=[ 1876], 20.00th=[ 1958], 00:17:45.824 | 30.00th=[ 2024], 40.00th=[ 2057], 50.00th=[ 2089], 60.00th=[ 2114], 00:17:45.824 | 70.00th=[ 2147], 80.00th=[ 2212], 90.00th=[ 2573], 95.00th=[ 3359], 00:17:45.824 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 6783], 99.95th=[ 7767], 00:17:45.824 | 99.99th=[12911] 00:17:45.824 bw ( KiB/s): min=25696, max=129176, per=100.00%, avg=111523.51, stdev=15696.61, samples=59 00:17:45.824 iops : min= 6424, max=32294, avg=27880.86, stdev=3924.16, samples=59 00:17:45.824 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(3259MiB/60003msec); 0 zone resets 00:17:45.824 slat (nsec): min=932, max=386697, avg=5518.50, stdev=2346.72 00:17:45.824 clat (usec): min=805, max=30534k, avg=4368.29, stdev=252361.46 00:17:45.824 lat (usec): min=808, max=30534k, avg=4373.81, stdev=252361.45 00:17:45.824 clat percentiles (usec): 00:17:45.824 | 1.00th=[ 1762], 5.00th=[ 1893], 10.00th=[ 1942], 20.00th=[ 2024], 00:17:45.824 | 30.00th=[ 2073], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180], 00:17:45.824 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2606], 95.00th=[ 3326], 00:17:45.824 | 99.00th=[ 5080], 99.50th=[ 5473], 99.90th=[ 6783], 99.95th=[ 7963], 00:17:45.824 | 99.99th=[12911] 00:17:45.824 bw ( KiB/s): min=25944, max=129520, per=100.00%, avg=111391.47, stdev=15608.12, samples=59 00:17:45.824 iops : min= 6486, max=32380, avg=27847.86, stdev=3902.03, samples=59 00:17:45.824 lat (usec) : 1000=0.01% 00:17:45.824 lat (msec) : 2=22.06%, 4=74.96%, 10=2.94%, 20=0.03%, >=2000=0.01% 00:17:45.824 cpu : usr=3.50%, sys=15.70%, ctx=54693, majf=0, minf=14 00:17:45.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:45.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:45.824 issued 
rwts: total=835266,834227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:45.824 00:17:45.824 Run status group 0 (all jobs): 00:17:45.824 READ: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=3263MiB (3421MB), run=60003-60003msec 00:17:45.824 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=3259MiB (3417MB), run=60003-60003msec 00:17:45.824 00:17:45.824 Disk stats (read/write): 00:17:45.824 ublkb1: ios=832177/831084, merge=0/0, ticks=3929564/3483562, in_queue=7413126, util=99.92% 00:17:45.824 23:55:15 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.824 [2024-12-05 23:55:15.873852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:45.824 [2024-12-05 23:55:15.915000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:45.824 [2024-12-05 23:55:15.915133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:45.824 [2024-12-05 23:55:15.922988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:45.824 [2024-12-05 23:55:15.923077] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:45.824 [2024-12-05 23:55:15.923084] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.824 23:55:15 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.824 [2024-12-05 23:55:15.939061] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:45.824 [2024-12-05 23:55:15.942779] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:45.824 [2024-12-05 23:55:15.942812] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.824 23:55:15 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:45.824 23:55:15 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:45.824 23:55:15 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74239 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74239 ']' 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74239 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74239 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.824 killing process with pid 74239 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74239' 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74239 00:17:45.824 23:55:15 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74239 
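The fio summary above can be cross-checked from the raw counters: 835266 reads of 4096 bytes over the 60.003 s run come out at the reported ~54.4 MiB/s (57.0 MB/s), and likewise for the write side. A quick shell check with the numbers copied from the output above (integer arithmetic, so the result is rounded down):

  echo $(( 835266 * 4096 / 60 / 1024 / 1024 ))   # prints 54  (read MiB/s)
  echo $(( 834227 * 4096 / 60 / 1024 / 1024 ))   # prints 54  (write MiB/s)

The ~30.5 s worst-case completion latency (max=30534k usec) plausibly reflects the window between the kill -9 issued around 23:54:20 and the recovery completing at 23:54:51, during which queued I/O simply waited for the re-attached device.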
00:17:45.824 [2024-12-05 23:55:17.023237] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:45.824 [2024-12-05 23:55:17.023288] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:45.824 00:17:45.824 real 1m4.371s 00:17:45.824 user 1m46.128s 00:17:45.824 sys 0m23.468s 00:17:45.825 23:55:17 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.825 23:55:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.825 ************************************ 00:17:45.825 END TEST ublk_recovery 00:17:45.825 ************************************ 00:17:45.825 23:55:17 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:45.825 23:55:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:45.825 23:55:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.825 23:55:17 -- common/autotest_common.sh@10 -- # set +x 00:17:45.825 23:55:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:45.825 23:55:17 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:45.825 23:55:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:45.825 23:55:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.825 23:55:17 -- common/autotest_common.sh@10 -- # set +x 00:17:45.825 ************************************ 00:17:45.825 START TEST ftl 00:17:45.825 ************************************ 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:45.825 * Looking for test storage... 
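ublk_recovery ends here after roughly 64 seconds of wall-clock time, and the harness moves on to the FTL suite. The asterisk banners, the START TEST/END TEST markers, and the real/user/sys timing lines are produced by the autotest run_test wrapper; a rough, hypothetical reconstruction of that pattern (the real implementation lives in autotest_common.sh and is not shown in this log):

  run_test() {                          # hypothetical sketch, for orientation only
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                           # runs the test script and reports real/user/sys
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }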
00:17:45.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.825 23:55:17 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.825 23:55:17 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.825 23:55:17 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.825 23:55:17 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.825 23:55:17 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.825 23:55:17 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:45.825 23:55:17 ftl -- scripts/common.sh@345 -- # : 1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.825 23:55:17 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.825 23:55:17 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@353 -- # local d=1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.825 23:55:17 ftl -- scripts/common.sh@355 -- # echo 1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.825 23:55:17 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@353 -- # local d=2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.825 23:55:17 ftl -- scripts/common.sh@355 -- # echo 2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.825 23:55:17 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.825 23:55:17 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.825 23:55:17 ftl -- scripts/common.sh@368 -- # return 0 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.825 --rc genhtml_branch_coverage=1 00:17:45.825 --rc genhtml_function_coverage=1 00:17:45.825 --rc genhtml_legend=1 00:17:45.825 --rc geninfo_all_blocks=1 00:17:45.825 --rc geninfo_unexecuted_blocks=1 00:17:45.825 00:17:45.825 ' 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.825 --rc genhtml_branch_coverage=1 00:17:45.825 --rc genhtml_function_coverage=1 00:17:45.825 --rc genhtml_legend=1 00:17:45.825 --rc geninfo_all_blocks=1 00:17:45.825 --rc geninfo_unexecuted_blocks=1 00:17:45.825 00:17:45.825 ' 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.825 --rc genhtml_branch_coverage=1 00:17:45.825 --rc genhtml_function_coverage=1 00:17:45.825 --rc 
genhtml_legend=1 00:17:45.825 --rc geninfo_all_blocks=1 00:17:45.825 --rc geninfo_unexecuted_blocks=1 00:17:45.825 00:17:45.825 ' 00:17:45.825 23:55:17 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.825 --rc genhtml_branch_coverage=1 00:17:45.825 --rc genhtml_function_coverage=1 00:17:45.825 --rc genhtml_legend=1 00:17:45.825 --rc geninfo_all_blocks=1 00:17:45.825 --rc geninfo_unexecuted_blocks=1 00:17:45.825 00:17:45.825 ' 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.825 23:55:17 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:45.825 23:55:17 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.825 23:55:17 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.825 23:55:17 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:45.825 23:55:17 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.825 23:55:17 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.825 23:55:17 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.825 23:55:17 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.825 23:55:17 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.825 23:55:17 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.825 23:55:17 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.825 23:55:17 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.825 23:55:17 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.825 23:55:17 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.825 23:55:17 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.825 23:55:17 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.825 23:55:17 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.825 23:55:17 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.825 23:55:17 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.825 23:55:17 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.825 23:55:17 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.825 23:55:17 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.825 23:55:17 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.825 23:55:17 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.825 23:55:17 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.825 23:55:17 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.825 23:55:17 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.825 23:55:17 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:45.825 23:55:17 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.825 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.825 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.825 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.825 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.825 23:55:18 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:45.825 23:55:18 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75040 00:17:45.825 23:55:18 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75040 00:17:45.825 23:55:18 ftl -- common/autotest_common.sh@835 -- # '[' -z 75040 ']' 00:17:45.825 23:55:18 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.825 23:55:18 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.825 23:55:18 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.825 23:55:18 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.825 23:55:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:46.083 [2024-12-05 23:55:18.545408] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:17:46.083 [2024-12-05 23:55:18.545546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75040 ] 00:17:46.083 [2024-12-05 23:55:18.706430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.341 [2024-12-05 23:55:18.794529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.911 23:55:19 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.911 23:55:19 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:46.911 23:55:19 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:46.911 23:55:19 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:47.854 23:55:20 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:47.854 23:55:20 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:48.116 23:55:20 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:48.116 23:55:20 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:48.116 23:55:20 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@50 -- # break 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:48.377 23:55:20 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:48.377 23:55:20 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:48.638 23:55:21 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:48.638 23:55:21 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:48.638 23:55:21 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:48.638 23:55:21 ftl -- ftl/ftl.sh@63 -- # break 00:17:48.638 23:55:21 ftl -- ftl/ftl.sh@66 -- # killprocess 75040 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@954 -- # '[' -z 75040 ']' 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@958 -- # kill -0 75040 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@959 -- # uname 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75040 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.639 killing process with pid 75040 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75040' 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@973 -- # kill 75040 00:17:48.639 23:55:21 ftl -- common/autotest_common.sh@978 -- # wait 75040 00:17:50.057 23:55:22 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:50.057 23:55:22 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:50.057 23:55:22 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:50.057 23:55:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.057 23:55:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:50.057 ************************************ 00:17:50.057 START TEST ftl_fio_basic 00:17:50.057 ************************************ 00:17:50.057 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:50.057 * Looking for test storage... 
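Before handing control to fio.sh, ftl.sh selected its devices by filtering bdev_get_bdevs output with jq, as traced above: 0000:00:10.0 (the first non-zoned NVMe bdev with 64-byte metadata and at least 1310720 blocks) became the NV-cache device, and 0000:00:11.0 became the base device. A condensed sketch of that selection using the same jq filters seen in the trace; the rpc.py invocation form, default socket, and the head -n1 standing in for the loop's break are assumptions:

  cache=$(scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)
  base=$(scripts/rpc.py bdev_get_bdevs | jq -r \
    ".[] | select(.driver_specific.nvme[0].pci_address!=\"$cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address" | head -n1)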
00:17:50.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.057 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:50.057 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:50.057 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:50.057 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:50.057 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:50.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.058 --rc genhtml_branch_coverage=1 00:17:50.058 --rc genhtml_function_coverage=1 00:17:50.058 --rc genhtml_legend=1 00:17:50.058 --rc geninfo_all_blocks=1 00:17:50.058 --rc geninfo_unexecuted_blocks=1 00:17:50.058 00:17:50.058 ' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:50.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.058 --rc 
genhtml_branch_coverage=1 00:17:50.058 --rc genhtml_function_coverage=1 00:17:50.058 --rc genhtml_legend=1 00:17:50.058 --rc geninfo_all_blocks=1 00:17:50.058 --rc geninfo_unexecuted_blocks=1 00:17:50.058 00:17:50.058 ' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:50.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.058 --rc genhtml_branch_coverage=1 00:17:50.058 --rc genhtml_function_coverage=1 00:17:50.058 --rc genhtml_legend=1 00:17:50.058 --rc geninfo_all_blocks=1 00:17:50.058 --rc geninfo_unexecuted_blocks=1 00:17:50.058 00:17:50.058 ' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:50.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.058 --rc genhtml_branch_coverage=1 00:17:50.058 --rc genhtml_function_coverage=1 00:17:50.058 --rc genhtml_legend=1 00:17:50.058 --rc geninfo_all_blocks=1 00:17:50.058 --rc geninfo_unexecuted_blocks=1 00:17:50.058 00:17:50.058 ' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.058 
23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75172 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75172 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75172 ']' 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
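The ftl_fio_basic target (pid 75172, core mask 7) is starting up here; in the steps that follow it attaches the base NVMe controller as nvme0 and get_bdev_size derives the capacity in MiB from block_size * num_blocks (4096 * 1310720 gives the 5120 MiB seen below). A minimal sketch of that size computation, assuming the default rpc.py socket:

  info=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<<"$info")    # 4096 in this log
  nb=$(jq '.[] .num_blocks' <<<"$info")    # 1310720 in this log
  echo $(( bs * nb / 1024 / 1024 ))        # 5120 (MiB)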
00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.058 23:55:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:50.058 [2024-12-05 23:55:22.693425] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:17:50.058 [2024-12-05 23:55:22.693544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75172 ] 00:17:50.320 [2024-12-05 23:55:22.850076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:50.320 [2024-12-05 23:55:22.934702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.320 [2024-12-05 23:55:22.934920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.320 [2024-12-05 23:55:22.934940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:50.891 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:51.152 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:51.413 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:51.413 { 00:17:51.413 "name": "nvme0n1", 00:17:51.413 "aliases": [ 00:17:51.413 "91f75468-500a-4397-be49-e13f7a485b49" 00:17:51.413 ], 00:17:51.413 "product_name": "NVMe disk", 00:17:51.413 "block_size": 4096, 00:17:51.413 "num_blocks": 1310720, 00:17:51.413 "uuid": "91f75468-500a-4397-be49-e13f7a485b49", 00:17:51.413 "numa_id": -1, 00:17:51.413 "assigned_rate_limits": { 00:17:51.413 "rw_ios_per_sec": 0, 00:17:51.413 "rw_mbytes_per_sec": 0, 00:17:51.413 "r_mbytes_per_sec": 0, 00:17:51.413 "w_mbytes_per_sec": 0 00:17:51.413 }, 00:17:51.413 "claimed": false, 00:17:51.413 "zoned": false, 00:17:51.413 "supported_io_types": { 00:17:51.413 "read": true, 00:17:51.413 "write": true, 00:17:51.413 "unmap": true, 00:17:51.413 "flush": true, 00:17:51.413 "reset": true, 00:17:51.413 "nvme_admin": true, 00:17:51.413 "nvme_io": true, 00:17:51.413 "nvme_io_md": 
false, 00:17:51.413 "write_zeroes": true, 00:17:51.413 "zcopy": false, 00:17:51.413 "get_zone_info": false, 00:17:51.413 "zone_management": false, 00:17:51.413 "zone_append": false, 00:17:51.413 "compare": true, 00:17:51.413 "compare_and_write": false, 00:17:51.413 "abort": true, 00:17:51.413 "seek_hole": false, 00:17:51.414 "seek_data": false, 00:17:51.414 "copy": true, 00:17:51.414 "nvme_iov_md": false 00:17:51.414 }, 00:17:51.414 "driver_specific": { 00:17:51.414 "nvme": [ 00:17:51.414 { 00:17:51.414 "pci_address": "0000:00:11.0", 00:17:51.414 "trid": { 00:17:51.414 "trtype": "PCIe", 00:17:51.414 "traddr": "0000:00:11.0" 00:17:51.414 }, 00:17:51.414 "ctrlr_data": { 00:17:51.414 "cntlid": 0, 00:17:51.414 "vendor_id": "0x1b36", 00:17:51.414 "model_number": "QEMU NVMe Ctrl", 00:17:51.414 "serial_number": "12341", 00:17:51.414 "firmware_revision": "8.0.0", 00:17:51.414 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:51.414 "oacs": { 00:17:51.414 "security": 0, 00:17:51.414 "format": 1, 00:17:51.414 "firmware": 0, 00:17:51.414 "ns_manage": 1 00:17:51.414 }, 00:17:51.414 "multi_ctrlr": false, 00:17:51.414 "ana_reporting": false 00:17:51.414 }, 00:17:51.414 "vs": { 00:17:51.414 "nvme_version": "1.4" 00:17:51.414 }, 00:17:51.414 "ns_data": { 00:17:51.414 "id": 1, 00:17:51.414 "can_share": false 00:17:51.414 } 00:17:51.414 } 00:17:51.414 ], 00:17:51.414 "mp_policy": "active_passive" 00:17:51.414 } 00:17:51.414 } 00:17:51.414 ]' 00:17:51.414 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:51.414 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:51.414 23:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:51.414 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:51.675 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:51.675 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:51.675 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c861a0d9-3818-45c7-928d-09802fac03b8 00:17:51.675 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c861a0d9-3818-45c7-928d-09802fac03b8 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4c21579c-7c94-4b59-971c-f83e31d07053 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4c21579c-7c94-4b59-971c-f83e31d07053 00:17:51.935 23:55:24 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4c21579c-7c94-4b59-971c-f83e31d07053 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:51.935 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:52.195 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:52.195 { 00:17:52.195 "name": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:52.195 "aliases": [ 00:17:52.195 "lvs/nvme0n1p0" 00:17:52.195 ], 00:17:52.195 "product_name": "Logical Volume", 00:17:52.195 "block_size": 4096, 00:17:52.195 "num_blocks": 26476544, 00:17:52.195 "uuid": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:52.195 "assigned_rate_limits": { 00:17:52.195 "rw_ios_per_sec": 0, 00:17:52.195 "rw_mbytes_per_sec": 0, 00:17:52.195 "r_mbytes_per_sec": 0, 00:17:52.195 "w_mbytes_per_sec": 0 00:17:52.195 }, 00:17:52.195 "claimed": false, 00:17:52.195 "zoned": false, 00:17:52.195 "supported_io_types": { 00:17:52.195 "read": true, 00:17:52.195 "write": true, 00:17:52.195 "unmap": true, 00:17:52.195 "flush": false, 00:17:52.195 "reset": true, 00:17:52.196 "nvme_admin": false, 00:17:52.196 "nvme_io": false, 00:17:52.196 "nvme_io_md": false, 00:17:52.196 "write_zeroes": true, 00:17:52.196 "zcopy": false, 00:17:52.196 "get_zone_info": false, 00:17:52.196 "zone_management": false, 00:17:52.196 "zone_append": false, 00:17:52.196 "compare": false, 00:17:52.196 "compare_and_write": false, 00:17:52.196 "abort": false, 00:17:52.196 "seek_hole": true, 00:17:52.196 "seek_data": true, 00:17:52.196 "copy": false, 00:17:52.196 "nvme_iov_md": false 00:17:52.196 }, 00:17:52.196 "driver_specific": { 00:17:52.196 "lvol": { 00:17:52.196 "lvol_store_uuid": "c861a0d9-3818-45c7-928d-09802fac03b8", 00:17:52.196 "base_bdev": "nvme0n1", 00:17:52.196 "thin_provision": true, 00:17:52.196 "num_allocated_clusters": 0, 00:17:52.196 "snapshot": false, 00:17:52.196 "clone": false, 00:17:52.196 "esnap_clone": false 00:17:52.196 } 00:17:52.196 } 00:17:52.196 } 00:17:52.196 ]' 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:52.196 23:55:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4c21579c-7c94-4b59-971c-f83e31d07053 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:52.456 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:52.717 { 00:17:52.717 "name": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:52.717 "aliases": [ 00:17:52.717 "lvs/nvme0n1p0" 00:17:52.717 ], 00:17:52.717 "product_name": "Logical Volume", 00:17:52.717 "block_size": 4096, 00:17:52.717 "num_blocks": 26476544, 00:17:52.717 "uuid": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:52.717 "assigned_rate_limits": { 00:17:52.717 "rw_ios_per_sec": 0, 00:17:52.717 "rw_mbytes_per_sec": 0, 00:17:52.717 "r_mbytes_per_sec": 0, 00:17:52.717 "w_mbytes_per_sec": 0 00:17:52.717 }, 00:17:52.717 "claimed": false, 00:17:52.717 "zoned": false, 00:17:52.717 "supported_io_types": { 00:17:52.717 "read": true, 00:17:52.717 "write": true, 00:17:52.717 "unmap": true, 00:17:52.717 "flush": false, 00:17:52.717 "reset": true, 00:17:52.717 "nvme_admin": false, 00:17:52.717 "nvme_io": false, 00:17:52.717 "nvme_io_md": false, 00:17:52.717 "write_zeroes": true, 00:17:52.717 "zcopy": false, 00:17:52.717 "get_zone_info": false, 00:17:52.717 "zone_management": false, 00:17:52.717 "zone_append": false, 00:17:52.717 "compare": false, 00:17:52.717 "compare_and_write": false, 00:17:52.717 "abort": false, 00:17:52.717 "seek_hole": true, 00:17:52.717 "seek_data": true, 00:17:52.717 "copy": false, 00:17:52.717 "nvme_iov_md": false 00:17:52.717 }, 00:17:52.717 "driver_specific": { 00:17:52.717 "lvol": { 00:17:52.717 "lvol_store_uuid": "c861a0d9-3818-45c7-928d-09802fac03b8", 00:17:52.717 "base_bdev": "nvme0n1", 00:17:52.717 "thin_provision": true, 00:17:52.717 "num_allocated_clusters": 0, 00:17:52.717 "snapshot": false, 00:17:52.717 "clone": false, 00:17:52.717 "esnap_clone": false 00:17:52.717 } 00:17:52.717 } 00:17:52.717 } 00:17:52.717 ]' 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:52.717 23:55:25 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:52.977 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4c21579c-7c94-4b59-971c-f83e31d07053 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:52.977 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c21579c-7c94-4b59-971c-f83e31d07053 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:53.241 { 00:17:53.241 "name": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:53.241 "aliases": [ 00:17:53.241 "lvs/nvme0n1p0" 00:17:53.241 ], 00:17:53.241 "product_name": "Logical Volume", 00:17:53.241 "block_size": 4096, 00:17:53.241 "num_blocks": 26476544, 00:17:53.241 "uuid": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:53.241 "assigned_rate_limits": { 00:17:53.241 "rw_ios_per_sec": 0, 00:17:53.241 "rw_mbytes_per_sec": 0, 00:17:53.241 "r_mbytes_per_sec": 0, 00:17:53.241 "w_mbytes_per_sec": 0 00:17:53.241 }, 00:17:53.241 "claimed": false, 00:17:53.241 "zoned": false, 00:17:53.241 "supported_io_types": { 00:17:53.241 "read": true, 00:17:53.241 "write": true, 00:17:53.241 "unmap": true, 00:17:53.241 "flush": false, 00:17:53.241 "reset": true, 00:17:53.241 "nvme_admin": false, 00:17:53.241 "nvme_io": false, 00:17:53.241 "nvme_io_md": false, 00:17:53.241 "write_zeroes": true, 00:17:53.241 "zcopy": false, 00:17:53.241 "get_zone_info": false, 00:17:53.241 "zone_management": false, 00:17:53.241 "zone_append": false, 00:17:53.241 "compare": false, 00:17:53.241 "compare_and_write": false, 00:17:53.241 "abort": false, 00:17:53.241 "seek_hole": true, 00:17:53.241 "seek_data": true, 00:17:53.241 "copy": false, 00:17:53.241 "nvme_iov_md": false 00:17:53.241 }, 00:17:53.241 "driver_specific": { 00:17:53.241 "lvol": { 00:17:53.241 "lvol_store_uuid": "c861a0d9-3818-45c7-928d-09802fac03b8", 00:17:53.241 "base_bdev": "nvme0n1", 00:17:53.241 "thin_provision": true, 00:17:53.241 "num_allocated_clusters": 0, 00:17:53.241 "snapshot": false, 00:17:53.241 "clone": false, 00:17:53.241 "esnap_clone": false 00:17:53.241 } 00:17:53.241 } 00:17:53.241 } 00:17:53.241 ]' 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:53.241 23:55:25 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4c21579c-7c94-4b59-971c-f83e31d07053 -c nvc0n1p0 --l2p_dram_limit 60 00:17:53.500 [2024-12-05 23:55:26.026656] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.026711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:53.500 [2024-12-05 23:55:26.026728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:53.500 [2024-12-05 23:55:26.026737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.026794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.026806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.500 [2024-12-05 23:55:26.026818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:53.500 [2024-12-05 23:55:26.026826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.026865] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:53.500 [2024-12-05 23:55:26.027595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:53.500 [2024-12-05 23:55:26.027626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.027635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.500 [2024-12-05 23:55:26.027646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:17:53.500 [2024-12-05 23:55:26.027655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.027726] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 658c2ff0-7e39-44ee-9a2e-0beb7170b3c3 00:17:53.500 [2024-12-05 23:55:26.029158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.029195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:53.500 [2024-12-05 23:55:26.029206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:53.500 [2024-12-05 23:55:26.029218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.036256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.036300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.500 [2024-12-05 23:55:26.036310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:17:53.500 [2024-12-05 23:55:26.036319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.036426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.036439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.500 [2024-12-05 23:55:26.036448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:53.500 [2024-12-05 23:55:26.036461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.036515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.036529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:53.500 [2024-12-05 23:55:26.036537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:53.500 [2024-12-05 23:55:26.036546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:53.500 [2024-12-05 23:55:26.036571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:53.500 [2024-12-05 23:55:26.040481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.040511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.500 [2024-12-05 23:55:26.040524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.913 ms 00:17:53.500 [2024-12-05 23:55:26.040534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.040577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.040586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:53.500 [2024-12-05 23:55:26.040596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:53.500 [2024-12-05 23:55:26.040604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.040649] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:53.500 [2024-12-05 23:55:26.040828] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:53.500 [2024-12-05 23:55:26.040852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:53.500 [2024-12-05 23:55:26.040863] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:53.500 [2024-12-05 23:55:26.040876] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:53.500 [2024-12-05 23:55:26.040886] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:53.500 [2024-12-05 23:55:26.040898] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:53.500 [2024-12-05 23:55:26.040906] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:53.500 [2024-12-05 23:55:26.040915] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:53.500 [2024-12-05 23:55:26.040922] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:53.500 [2024-12-05 23:55:26.040931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.040941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:53.500 [2024-12-05 23:55:26.040951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:53.500 [2024-12-05 23:55:26.040958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.041065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.500 [2024-12-05 23:55:26.041084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:53.500 [2024-12-05 23:55:26.041095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:53.500 [2024-12-05 23:55:26.041103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.500 [2024-12-05 23:55:26.041221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:53.500 [2024-12-05 23:55:26.041239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:53.500 
[2024-12-05 23:55:26.041251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:53.500 [2024-12-05 23:55:26.041277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:53.500 [2024-12-05 23:55:26.041305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.500 [2024-12-05 23:55:26.041320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:53.500 [2024-12-05 23:55:26.041326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:53.500 [2024-12-05 23:55:26.041335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.500 [2024-12-05 23:55:26.041342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:53.500 [2024-12-05 23:55:26.041351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:53.500 [2024-12-05 23:55:26.041357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:53.500 [2024-12-05 23:55:26.041376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:53.500 [2024-12-05 23:55:26.041401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:53.500 [2024-12-05 23:55:26.041422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:53.500 [2024-12-05 23:55:26.041446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:53.500 [2024-12-05 23:55:26.041473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.500 [2024-12-05 23:55:26.041489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:53.500 [2024-12-05 23:55:26.041501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:53.500 [2024-12-05 23:55:26.041523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:53.500 [2024-12-05 23:55:26.041533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:53.500 [2024-12-05 23:55:26.041539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:53.500 [2024-12-05 23:55:26.041547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.500 [2024-12-05 23:55:26.041554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:53.501 [2024-12-05 23:55:26.041563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:53.501 [2024-12-05 23:55:26.041570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.501 [2024-12-05 23:55:26.041579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:53.501 [2024-12-05 23:55:26.041586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:53.501 [2024-12-05 23:55:26.041594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.501 [2024-12-05 23:55:26.041600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:53.501 [2024-12-05 23:55:26.041610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:53.501 [2024-12-05 23:55:26.041618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.501 [2024-12-05 23:55:26.041627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.501 [2024-12-05 23:55:26.041635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:53.501 [2024-12-05 23:55:26.041645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:53.501 [2024-12-05 23:55:26.041653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:53.501 [2024-12-05 23:55:26.041662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:53.501 [2024-12-05 23:55:26.041668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:53.501 [2024-12-05 23:55:26.041677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:53.501 [2024-12-05 23:55:26.041686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:53.501 [2024-12-05 23:55:26.041697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:53.501 [2024-12-05 23:55:26.041715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:53.501 [2024-12-05 23:55:26.041723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:53.501 [2024-12-05 23:55:26.041731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:53.501 [2024-12-05 23:55:26.041738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:53.501 [2024-12-05 23:55:26.041748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:53.501 [2024-12-05 
23:55:26.041759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:53.501 [2024-12-05 23:55:26.041767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:53.501 [2024-12-05 23:55:26.041774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:53.501 [2024-12-05 23:55:26.041785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:53.501 [2024-12-05 23:55:26.041828] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:53.501 [2024-12-05 23:55:26.041838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:53.501 [2024-12-05 23:55:26.041857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:53.501 [2024-12-05 23:55:26.041865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:53.501 [2024-12-05 23:55:26.041875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:53.501 [2024-12-05 23:55:26.041882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.501 [2024-12-05 23:55:26.041891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:53.501 [2024-12-05 23:55:26.041899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:17:53.501 [2024-12-05 23:55:26.041908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.501 [2024-12-05 23:55:26.041978] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
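The startup trace above corresponds to the bdev stack the test assembled just before it: a 103424 MiB logical volume (26476544 blocks x 4096 B) as the base device, a 5171 MiB split of nvc0n1 as the write-buffer cache, and a 60 MiB L2P DRAM limit. As a rough, hand-runnable sketch of that sequence against a live SPDK target -- bdev names, sizes and rpc.py flags are taken from this run's commands; the inline size arithmetic is illustrative and not part of fio.sh itself:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BASE=4c21579c-7c94-4b59-971c-f83e31d07053        # lvol created earlier in this run
    # Base bdev size in MiB = block_size * num_blocks / 1 MiB = 4096 * 26476544 / 1048576 = 103424
    bs=$($RPC bdev_get_bdevs -b "$BASE" | jq '.[] .block_size')
    nb=$($RPC bdev_get_bdevs -b "$BASE" | jq '.[] .num_blocks')
    echo $(( bs * nb / 1048576 ))                    # prints 103424
    # Carve the 5171 MiB NV cache partition out of nvc0n1 (yields nvc0n1p0)
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # Create the FTL bdev over the lvol, backed by the cache, with a 60 MiB L2P DRAM limit
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$BASE" -c nvc0n1p0 --l2p_dram_limit 60

The resulting ftl0 exposes 20971520 blocks (80 GiB), matching the L2P entry count and layout dumped above.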
00:17:53.501 [2024-12-05 23:55:26.041993] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:56.782 [2024-12-05 23:55:28.849355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.849430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:56.782 [2024-12-05 23:55:28.849446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2807.365 ms 00:17:56.782 [2024-12-05 23:55:28.849456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.877528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.877582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.782 [2024-12-05 23:55:28.877596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.780 ms 00:17:56.782 [2024-12-05 23:55:28.877607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.877745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.877759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.782 [2024-12-05 23:55:28.877768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:56.782 [2024-12-05 23:55:28.877780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.929053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.929101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.782 [2024-12-05 23:55:28.929117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.224 ms 00:17:56.782 [2024-12-05 23:55:28.929129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.929178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.929190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.782 [2024-12-05 23:55:28.929199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.782 [2024-12-05 23:55:28.929208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.929666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.929694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.782 [2024-12-05 23:55:28.929705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:17:56.782 [2024-12-05 23:55:28.929718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.929882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.929898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.782 [2024-12-05 23:55:28.929907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:56.782 [2024-12-05 23:55:28.929918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.945888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.945921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.782 [2024-12-05 
23:55:28.945932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.943 ms 00:17:56.782 [2024-12-05 23:55:28.945941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:28.958203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.782 [2024-12-05 23:55:28.975213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:28.975246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.782 [2024-12-05 23:55:28.975263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.155 ms 00:17:56.782 [2024-12-05 23:55:28.975271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:29.029289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:29.029327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:56.782 [2024-12-05 23:55:29.029343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.975 ms 00:17:56.782 [2024-12-05 23:55:29.029352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:29.029545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:29.029564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.782 [2024-12-05 23:55:29.029577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:56.782 [2024-12-05 23:55:29.029585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:29.052607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:29.052644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:56.782 [2024-12-05 23:55:29.052658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.959 ms 00:17:56.782 [2024-12-05 23:55:29.052667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.782 [2024-12-05 23:55:29.074847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.782 [2024-12-05 23:55:29.074876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:56.782 [2024-12-05 23:55:29.074890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.134 ms 00:17:56.783 [2024-12-05 23:55:29.074898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.075491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.075513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.783 [2024-12-05 23:55:29.075523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:56.783 [2024-12-05 23:55:29.075532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.144798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.144831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:56.783 [2024-12-05 23:55:29.144849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.220 ms 00:17:56.783 [2024-12-05 23:55:29.144860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 
23:55:29.168956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.168997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:56.783 [2024-12-05 23:55:29.169010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.007 ms 00:17:56.783 [2024-12-05 23:55:29.169019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.191654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.191696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:56.783 [2024-12-05 23:55:29.191709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.590 ms 00:17:56.783 [2024-12-05 23:55:29.191716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.214550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.214585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.783 [2024-12-05 23:55:29.214598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.787 ms 00:17:56.783 [2024-12-05 23:55:29.214606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.214655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.214665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.783 [2024-12-05 23:55:29.214681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.783 [2024-12-05 23:55:29.214689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.214778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.783 [2024-12-05 23:55:29.214788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.783 [2024-12-05 23:55:29.214799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:56.783 [2024-12-05 23:55:29.214807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.783 [2024-12-05 23:55:29.215858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3188.749 ms, result 0 00:17:56.783 { 00:17:56.783 "name": "ftl0", 00:17:56.783 "uuid": "658c2ff0-7e39-44ee-9a2e-0beb7170b3c3" 00:17:56.783 } 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:56.783 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:57.041 [ 00:17:57.041 { 00:17:57.041 "name": "ftl0", 00:17:57.041 "aliases": [ 00:17:57.041 "658c2ff0-7e39-44ee-9a2e-0beb7170b3c3" 00:17:57.041 ], 00:17:57.041 "product_name": "FTL 
disk", 00:17:57.041 "block_size": 4096, 00:17:57.041 "num_blocks": 20971520, 00:17:57.041 "uuid": "658c2ff0-7e39-44ee-9a2e-0beb7170b3c3", 00:17:57.041 "assigned_rate_limits": { 00:17:57.041 "rw_ios_per_sec": 0, 00:17:57.041 "rw_mbytes_per_sec": 0, 00:17:57.041 "r_mbytes_per_sec": 0, 00:17:57.041 "w_mbytes_per_sec": 0 00:17:57.041 }, 00:17:57.041 "claimed": false, 00:17:57.041 "zoned": false, 00:17:57.041 "supported_io_types": { 00:17:57.041 "read": true, 00:17:57.041 "write": true, 00:17:57.041 "unmap": true, 00:17:57.041 "flush": true, 00:17:57.041 "reset": false, 00:17:57.041 "nvme_admin": false, 00:17:57.041 "nvme_io": false, 00:17:57.041 "nvme_io_md": false, 00:17:57.041 "write_zeroes": true, 00:17:57.041 "zcopy": false, 00:17:57.041 "get_zone_info": false, 00:17:57.041 "zone_management": false, 00:17:57.041 "zone_append": false, 00:17:57.041 "compare": false, 00:17:57.041 "compare_and_write": false, 00:17:57.041 "abort": false, 00:17:57.041 "seek_hole": false, 00:17:57.041 "seek_data": false, 00:17:57.041 "copy": false, 00:17:57.041 "nvme_iov_md": false 00:17:57.041 }, 00:17:57.041 "driver_specific": { 00:17:57.041 "ftl": { 00:17:57.041 "base_bdev": "4c21579c-7c94-4b59-971c-f83e31d07053", 00:17:57.041 "cache": "nvc0n1p0" 00:17:57.041 } 00:17:57.041 } 00:17:57.041 } 00:17:57.041 ] 00:17:57.041 23:55:29 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:57.041 23:55:29 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:57.041 23:55:29 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:57.298 23:55:29 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:57.298 23:55:29 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:57.557 [2024-12-05 23:55:30.013177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.013234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.557 [2024-12-05 23:55:30.013248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.557 [2024-12-05 23:55:30.013258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.013295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.557 [2024-12-05 23:55:30.016118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.016149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.557 [2024-12-05 23:55:30.016161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:17:57.557 [2024-12-05 23:55:30.016171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.016610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.016634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.557 [2024-12-05 23:55:30.016645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:17:57.557 [2024-12-05 23:55:30.016652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.019893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.019924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.557 
[2024-12-05 23:55:30.019936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:17:57.557 [2024-12-05 23:55:30.019945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.026176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.026203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.557 [2024-12-05 23:55:30.026216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:17:57.557 [2024-12-05 23:55:30.026223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.050191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.050226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.557 [2024-12-05 23:55:30.050254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.883 ms 00:17:57.557 [2024-12-05 23:55:30.050262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.065086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.065119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.557 [2024-12-05 23:55:30.065134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.771 ms 00:17:57.557 [2024-12-05 23:55:30.065143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.065318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.065339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.557 [2024-12-05 23:55:30.065350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:57.557 [2024-12-05 23:55:30.065359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.087868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.087897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.557 [2024-12-05 23:55:30.087909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.485 ms 00:17:57.557 [2024-12-05 23:55:30.087918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.110558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.110586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.557 [2024-12-05 23:55:30.110598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.593 ms 00:17:57.557 [2024-12-05 23:55:30.110605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.133012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.133041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.557 [2024-12-05 23:55:30.133053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.365 ms 00:17:57.557 [2024-12-05 23:55:30.133060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.155370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.557 [2024-12-05 23:55:30.155399] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.557 [2024-12-05 23:55:30.155411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.200 ms 00:17:57.557 [2024-12-05 23:55:30.155418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.557 [2024-12-05 23:55:30.155458] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.557 [2024-12-05 23:55:30.155472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 
[2024-12-05 23:55:30.155669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.557 [2024-12-05 23:55:30.155816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:57.558 [2024-12-05 23:55:30.155894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.155995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.558 [2024-12-05 23:55:30.156453] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.558 [2024-12-05 23:55:30.156462] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 658c2ff0-7e39-44ee-9a2e-0beb7170b3c3 00:17:57.558 [2024-12-05 23:55:30.156470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.558 [2024-12-05 23:55:30.156481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.558 [2024-12-05 23:55:30.156487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.558 [2024-12-05 23:55:30.156499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.558 [2024-12-05 23:55:30.156506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.558 [2024-12-05 23:55:30.156518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.558 [2024-12-05 23:55:30.156525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.558 [2024-12-05 23:55:30.156534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.558 [2024-12-05 23:55:30.156541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.558 [2024-12-05 23:55:30.156551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.558 [2024-12-05 23:55:30.156559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.558 [2024-12-05 23:55:30.156569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:17:57.558 [2024-12-05 23:55:30.156576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.558 [2024-12-05 23:55:30.169596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.558 [2024-12-05 23:55:30.169626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.558 [2024-12-05 23:55:30.169638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.985 ms 00:17:57.558 [2024-12-05 23:55:30.169646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.558 [2024-12-05 23:55:30.170047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.558 [2024-12-05 23:55:30.170068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.558 [2024-12-05 23:55:30.170079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:17:57.558 [2024-12-05 23:55:30.170087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.558 [2024-12-05 23:55:30.216155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.558 [2024-12-05 23:55:30.216187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.558 [2024-12-05 23:55:30.216200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.558 [2024-12-05 23:55:30.216208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
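The rollback entries here and below are the tail of the 'FTL shutdown' sequence started by bdev_ftl_unload; once it completes, the test kills the SPDK app and runs the first fio job through the SPDK bdev plugin, which rebuilds the bdev stack from the JSON saved by save_subsystem_config at fio.sh@69 (presumably referenced from the job or config file). Reproducing that invocation by hand looks roughly like the following sketch -- the paths and the libasan preload are the ones printed later in this log, and ioengine=spdk_bdev plus the target bdev are assumed to be set inside randw-verify.fio:

    # Cleanly tear down the FTL bdev (produces the 'FTL shutdown' trace seen here)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
    # Run the randw-verify job through fio with the SPDK bdev plugin preloaded
    # (the libasan preload is only needed on this ASan-instrumented build)
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio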
00:17:57.558 [2024-12-05 23:55:30.216280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.558 [2024-12-05 23:55:30.216290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.558 [2024-12-05 23:55:30.216300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.558 [2024-12-05 23:55:30.216308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.558 [2024-12-05 23:55:30.216398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.558 [2024-12-05 23:55:30.216411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.558 [2024-12-05 23:55:30.216422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.558 [2024-12-05 23:55:30.216430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.558 [2024-12-05 23:55:30.216458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.558 [2024-12-05 23:55:30.216467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.558 [2024-12-05 23:55:30.216476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.558 [2024-12-05 23:55:30.216485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.301284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.301334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.816 [2024-12-05 23:55:30.301348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.301357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.366506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.366551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.816 [2024-12-05 23:55:30.366564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.366573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.366679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.366691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.816 [2024-12-05 23:55:30.366704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.366712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.366785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.366795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.816 [2024-12-05 23:55:30.366806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.366813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.366956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.366988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.816 [2024-12-05 23:55:30.367000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 
23:55:30.367011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.367070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.367083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.816 [2024-12-05 23:55:30.367095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.367105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.367152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.367161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.816 [2024-12-05 23:55:30.367172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.367182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.367238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.816 [2024-12-05 23:55:30.367249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.816 [2024-12-05 23:55:30.367258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.816 [2024-12-05 23:55:30.367266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.816 [2024-12-05 23:55:30.367430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.227 ms, result 0 00:17:57.816 true 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75172 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75172 ']' 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75172 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75172 00:17:57.816 killing process with pid 75172 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75172' 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75172 00:17:57.816 23:55:30 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75172 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:07.802 23:55:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:07.802 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:07.802 fio-3.35 00:18:07.802 Starting 1 thread 00:18:13.079 00:18:13.079 test: (groupid=0, jobs=1): err= 0: pid=75365: Thu Dec 5 23:55:44 2024 00:18:13.079 read: IOPS=1107, BW=73.5MiB/s (77.1MB/s)(255MiB/3461msec) 00:18:13.079 slat (nsec): min=4160, max=24261, avg=5883.31, stdev=2113.10 00:18:13.079 clat (usec): min=252, max=2577, avg=402.48, stdev=115.04 00:18:13.079 lat (usec): min=257, max=2583, avg=408.36, stdev=115.38 00:18:13.079 clat percentiles (usec): 00:18:13.079 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:18:13.079 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 396], 00:18:13.079 | 70.00th=[ 441], 80.00th=[ 486], 90.00th=[ 545], 95.00th=[ 603], 00:18:13.079 | 99.00th=[ 816], 99.50th=[ 881], 99.90th=[ 1090], 99.95th=[ 1467], 00:18:13.079 | 99.99th=[ 2573] 00:18:13.079 write: IOPS=1115, BW=74.1MiB/s (77.7MB/s)(256MiB/3457msec); 0 zone resets 00:18:13.079 slat (nsec): min=15074, max=62299, avg=22569.90, stdev=4582.38 00:18:13.079 clat (usec): min=274, max=2969, avg=454.90, stdev=146.48 00:18:13.079 lat (usec): min=302, max=2994, avg=477.47, stdev=146.31 00:18:13.079 clat percentiles (usec): 00:18:13.079 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 355], 00:18:13.079 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 416], 60.00th=[ 433], 00:18:13.079 | 70.00th=[ 494], 80.00th=[ 553], 90.00th=[ 627], 95.00th=[ 676], 00:18:13.079 | 99.00th=[ 988], 99.50th=[ 1106], 99.90th=[ 1647], 99.95th=[ 2024], 00:18:13.079 | 99.99th=[ 2966] 00:18:13.079 bw ( KiB/s): min=61336, max=87312, per=100.00%, avg=77293.33, stdev=11020.76, samples=6 00:18:13.079 iops : min= 902, max= 1284, avg=1136.67, stdev=162.07, samples=6 00:18:13.079 lat (usec) : 500=77.73%, 750=20.00%, 1000=1.70% 
00:18:13.079 lat (msec) : 2=0.52%, 4=0.04% 00:18:13.079 cpu : usr=99.25%, sys=0.09%, ctx=5, majf=0, minf=1169 00:18:13.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:13.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.080 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:13.080 00:18:13.080 Run status group 0 (all jobs): 00:18:13.080 READ: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=255MiB (267MB), run=3461-3461msec 00:18:13.080 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=256MiB (269MB), run=3457-3457msec 00:18:13.650 ----------------------------------------------------- 00:18:13.650 Suppressions used: 00:18:13.650 count bytes template 00:18:13.650 1 5 /usr/src/fio/parse.c 00:18:13.650 1 8 libtcmalloc_minimal.so 00:18:13.650 1 904 libcrypto.so 00:18:13.650 ----------------------------------------------------- 00:18:13.650 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:13.650 23:55:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:13.911 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:13.911 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:13.911 fio-3.35 00:18:13.911 Starting 2 threads 00:18:40.565 00:18:40.565 first_half: (groupid=0, jobs=1): err= 0: pid=75468: Thu Dec 5 23:56:10 2024 00:18:40.565 read: IOPS=2847, BW=11.1MiB/s (11.7MB/s)(255MiB/22912msec) 00:18:40.565 slat (nsec): min=3163, max=27818, avg=5032.13, stdev=888.70 00:18:40.565 clat (usec): min=600, max=343973, avg=34775.33, stdev=17731.54 00:18:40.565 lat (usec): min=605, max=343978, avg=34780.36, stdev=17731.54 00:18:40.565 clat percentiles (msec): 00:18:40.565 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 29], 00:18:40.565 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:40.565 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 56], 00:18:40.565 | 99.00th=[ 123], 99.50th=[ 153], 99.90th=[ 178], 99.95th=[ 243], 00:18:40.565 | 99.99th=[ 330] 00:18:40.565 write: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(256MiB/19213msec); 0 zone resets 00:18:40.565 slat (usec): min=3, max=710, avg= 6.39, stdev= 4.75 00:18:40.565 clat (usec): min=339, max=118045, avg=10095.39, stdev=18102.13 00:18:40.565 lat (usec): min=349, max=118052, avg=10101.78, stdev=18102.13 00:18:40.565 clat percentiles (usec): 00:18:40.565 | 1.00th=[ 734], 5.00th=[ 930], 10.00th=[ 1106], 20.00th=[ 1500], 00:18:40.565 | 30.00th=[ 2409], 40.00th=[ 3458], 50.00th=[ 4490], 60.00th=[ 5342], 00:18:40.565 | 70.00th=[ 6325], 80.00th=[ 11600], 90.00th=[ 16450], 95.00th=[ 58459], 00:18:40.565 | 99.00th=[ 94897], 99.50th=[100140], 99.90th=[107480], 99.95th=[110625], 00:18:40.565 | 99.99th=[116917] 00:18:40.565 bw ( KiB/s): min= 136, max=49048, per=95.56%, avg=23831.27, stdev=14848.21, samples=22 00:18:40.565 iops : min= 34, max=12262, avg=5957.82, stdev=3712.05, samples=22 00:18:40.565 lat (usec) : 500=0.03%, 750=0.58%, 1000=2.93% 00:18:40.565 lat (msec) : 2=9.97%, 4=9.16%, 10=16.40%, 20=7.59%, 50=47.10% 00:18:40.565 lat (msec) : 100=5.01%, 250=1.21%, 500=0.02% 00:18:40.565 cpu : usr=99.40%, sys=0.13%, ctx=46, majf=0, minf=5581 00:18:40.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:40.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.565 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.565 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.565 second_half: (groupid=0, jobs=1): err= 0: pid=75469: Thu Dec 5 23:56:10 2024 00:18:40.565 read: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(255MiB/23090msec) 00:18:40.565 slat (nsec): min=3146, max=58207, avg=5242.00, stdev=1092.28 00:18:40.565 clat (usec): min=619, max=355270, avg=33839.30, stdev=19485.11 00:18:40.565 lat (usec): min=627, max=355275, avg=33844.55, stdev=19485.14 00:18:40.565 clat percentiles (msec): 00:18:40.565 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 28], 00:18:40.565 | 30.00th=[ 29], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:40.565 | 70.00th=[ 33], 80.00th=[ 35], 
90.00th=[ 39], 95.00th=[ 48], 00:18:40.565 | 99.00th=[ 136], 99.50th=[ 161], 99.90th=[ 207], 99.95th=[ 271], 00:18:40.565 | 99.99th=[ 347] 00:18:40.565 write: IOPS=3117, BW=12.2MiB/s (12.8MB/s)(256MiB/21023msec); 0 zone resets 00:18:40.565 slat (usec): min=3, max=680, avg= 6.60, stdev= 3.77 00:18:40.565 clat (usec): min=347, max=119088, avg=11354.75, stdev=19591.51 00:18:40.565 lat (usec): min=358, max=119094, avg=11361.35, stdev=19591.55 00:18:40.565 clat percentiles (usec): 00:18:40.565 | 1.00th=[ 676], 5.00th=[ 857], 10.00th=[ 996], 20.00th=[ 1336], 00:18:40.565 | 30.00th=[ 1926], 40.00th=[ 2868], 50.00th=[ 4047], 60.00th=[ 5538], 00:18:40.565 | 70.00th=[ 7242], 80.00th=[ 13829], 90.00th=[ 31327], 95.00th=[ 60031], 00:18:40.565 | 99.00th=[ 95945], 99.50th=[102237], 99.90th=[111674], 99.95th=[114820], 00:18:40.565 | 99.99th=[117965] 00:18:40.565 bw ( KiB/s): min= 48, max=60224, per=80.85%, avg=20164.92, stdev=17025.01, samples=26 00:18:40.565 iops : min= 12, max=15056, avg=5041.23, stdev=4256.25, samples=26 00:18:40.565 lat (usec) : 500=0.03%, 750=0.97%, 1000=4.15% 00:18:40.565 lat (msec) : 2=10.34%, 4=9.50%, 10=14.04%, 20=6.90%, 50=48.17% 00:18:40.565 lat (msec) : 100=4.54%, 250=1.32%, 500=0.03% 00:18:40.565 cpu : usr=99.23%, sys=0.10%, ctx=44, majf=0, minf=5534 00:18:40.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:40.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.565 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.565 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.565 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.565 00:18:40.565 Run status group 0 (all jobs): 00:18:40.565 READ: bw=22.1MiB/s (23.2MB/s), 11.0MiB/s-11.1MiB/s (11.6MB/s-11.7MB/s), io=510MiB (535MB), run=22912-23090msec 00:18:40.565 WRITE: bw=24.4MiB/s (25.5MB/s), 12.2MiB/s-13.3MiB/s (12.8MB/s-14.0MB/s), io=512MiB (537MB), run=19213-21023msec 00:18:40.565 ----------------------------------------------------- 00:18:40.565 Suppressions used: 00:18:40.565 count bytes template 00:18:40.565 2 10 /usr/src/fio/parse.c 00:18:40.565 2 192 /usr/src/fio/iolog.c 00:18:40.565 1 8 libtcmalloc_minimal.so 00:18:40.565 1 904 libcrypto.so 00:18:40.565 ----------------------------------------------------- 00:18:40.565 00:18:40.565 23:56:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:40.565 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.565 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.565 23:56:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:40.565 23:56:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:40.566 23:56:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.825 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:40.825 fio-3.35 00:18:40.825 Starting 1 thread 00:18:55.756 00:18:55.757 test: (groupid=0, jobs=1): err= 0: pid=75772: Thu Dec 5 23:56:26 2024 00:18:55.757 read: IOPS=7967, BW=31.1MiB/s (32.6MB/s)(255MiB/8183msec) 00:18:55.757 slat (nsec): min=3104, max=20710, avg=4107.52, stdev=1084.00 00:18:55.757 clat (usec): min=536, max=31710, avg=16055.73, stdev=1831.79 00:18:55.757 lat (usec): min=541, max=31715, avg=16059.84, stdev=1831.84 00:18:55.757 clat percentiles (usec): 00:18:55.757 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15139], 20.00th=[15401], 00:18:55.757 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15664], 00:18:55.757 | 70.00th=[15795], 80.00th=[15926], 90.00th=[17171], 95.00th=[20055], 00:18:55.757 | 99.00th=[24249], 99.50th=[25560], 99.90th=[29230], 99.95th=[30016], 00:18:55.757 | 99.99th=[31327] 00:18:55.757 write: IOPS=16.0k, BW=62.5MiB/s (65.5MB/s)(256MiB/4096msec); 0 zone resets 00:18:55.757 slat (usec): min=3, max=559, avg= 6.60, stdev= 3.15 00:18:55.757 clat (usec): min=436, max=51203, avg=7956.72, stdev=10234.40 00:18:55.757 lat (usec): min=442, max=51210, avg=7963.32, stdev=10234.43 00:18:55.757 clat percentiles (usec): 00:18:55.757 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 955], 00:18:55.757 | 30.00th=[ 1123], 40.00th=[ 1532], 50.00th=[ 4883], 60.00th=[ 5669], 00:18:55.757 | 70.00th=[ 7046], 80.00th=[ 8717], 90.00th=[29230], 95.00th=[31327], 00:18:55.757 | 99.00th=[36963], 99.50th=[39060], 99.90th=[43254], 99.95th=[44303], 00:18:55.757 | 99.99th=[47449] 00:18:55.757 bw ( KiB/s): min= 8280, max=90720, per=91.02%, avg=58254.22, stdev=23038.20, samples=9 00:18:55.757 iops : min= 2070, max=22680, avg=14563.56, stdev=5759.55, samples=9 00:18:55.757 lat (usec) : 500=0.01%, 750=3.70%, 1000=7.67% 00:18:55.757 lat (msec) : 2=9.18%, 4=0.88%, 10=20.45%, 20=47.54%, 50=10.57% 00:18:55.757 lat (msec) : 100=0.01% 
00:18:55.757 cpu : usr=99.10%, sys=0.20%, ctx=27, majf=0, minf=5565 00:18:55.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:55.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.757 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.757 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.757 00:18:55.757 Run status group 0 (all jobs): 00:18:55.757 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8183-8183msec 00:18:55.757 WRITE: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=256MiB (268MB), run=4096-4096msec 00:18:55.757 ----------------------------------------------------- 00:18:55.757 Suppressions used: 00:18:55.757 count bytes template 00:18:55.757 1 5 /usr/src/fio/parse.c 00:18:55.757 2 192 /usr/src/fio/iolog.c 00:18:55.757 1 8 libtcmalloc_minimal.so 00:18:55.757 1 904 libcrypto.so 00:18:55.757 ----------------------------------------------------- 00:18:55.757 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.017 Remove shared memory files 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57158 /dev/shm/spdk_tgt_trace.pid74094 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:56.017 00:18:56.017 real 1m6.068s 00:18:56.017 user 2m20.560s 00:18:56.017 sys 0m9.852s 00:18:56.017 ************************************ 00:18:56.017 END TEST ftl_fio_basic 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.017 23:56:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.017 ************************************ 00:18:56.017 23:56:28 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.017 23:56:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:56.017 23:56:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.017 23:56:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:56.017 ************************************ 00:18:56.017 START TEST ftl_bdevperf 00:18:56.017 ************************************ 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:56.018 * Looking for test storage... 
00:18:56.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.018 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.278 --rc genhtml_branch_coverage=1 00:18:56.278 --rc genhtml_function_coverage=1 00:18:56.278 --rc genhtml_legend=1 00:18:56.278 --rc geninfo_all_blocks=1 00:18:56.278 --rc geninfo_unexecuted_blocks=1 00:18:56.278 00:18:56.278 ' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.278 --rc genhtml_branch_coverage=1 00:18:56.278 
--rc genhtml_function_coverage=1 00:18:56.278 --rc genhtml_legend=1 00:18:56.278 --rc geninfo_all_blocks=1 00:18:56.278 --rc geninfo_unexecuted_blocks=1 00:18:56.278 00:18:56.278 ' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.278 --rc genhtml_branch_coverage=1 00:18:56.278 --rc genhtml_function_coverage=1 00:18:56.278 --rc genhtml_legend=1 00:18:56.278 --rc geninfo_all_blocks=1 00:18:56.278 --rc geninfo_unexecuted_blocks=1 00:18:56.278 00:18:56.278 ' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.278 --rc genhtml_branch_coverage=1 00:18:56.278 --rc genhtml_function_coverage=1 00:18:56.278 --rc genhtml_legend=1 00:18:56.278 --rc geninfo_all_blocks=1 00:18:56.278 --rc geninfo_unexecuted_blocks=1 00:18:56.278 00:18:56.278 ' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:56.278 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75999 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75999 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75999 ']' 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.279 23:56:28 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.279 [2024-12-05 23:56:28.814069] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:18:56.279 [2024-12-05 23:56:28.814185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75999 ] 00:18:56.279 [2024-12-05 23:56:28.974763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.540 [2024-12-05 23:56:29.080782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:57.113 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:57.375 23:56:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:57.638 { 00:18:57.638 "name": "nvme0n1", 00:18:57.638 "aliases": [ 00:18:57.638 "255d57a8-cefc-4275-a6f1-06c365bc8ba1" 00:18:57.638 ], 00:18:57.638 "product_name": "NVMe disk", 00:18:57.638 "block_size": 4096, 00:18:57.638 "num_blocks": 1310720, 00:18:57.638 "uuid": "255d57a8-cefc-4275-a6f1-06c365bc8ba1", 00:18:57.638 "numa_id": -1, 00:18:57.638 "assigned_rate_limits": { 00:18:57.638 "rw_ios_per_sec": 0, 00:18:57.638 "rw_mbytes_per_sec": 0, 00:18:57.638 "r_mbytes_per_sec": 0, 00:18:57.638 "w_mbytes_per_sec": 0 00:18:57.638 }, 00:18:57.638 "claimed": true, 00:18:57.638 "claim_type": "read_many_write_one", 00:18:57.638 "zoned": false, 00:18:57.638 "supported_io_types": { 00:18:57.638 "read": true, 00:18:57.638 "write": true, 00:18:57.638 "unmap": true, 00:18:57.638 "flush": true, 00:18:57.638 "reset": true, 00:18:57.638 "nvme_admin": true, 00:18:57.638 "nvme_io": true, 00:18:57.638 "nvme_io_md": false, 00:18:57.638 "write_zeroes": true, 00:18:57.638 "zcopy": false, 00:18:57.638 "get_zone_info": false, 00:18:57.638 "zone_management": false, 00:18:57.638 "zone_append": false, 00:18:57.638 "compare": true, 00:18:57.638 "compare_and_write": false, 00:18:57.638 "abort": true, 00:18:57.638 "seek_hole": false, 00:18:57.638 "seek_data": false, 00:18:57.638 "copy": true, 00:18:57.638 "nvme_iov_md": false 00:18:57.638 }, 00:18:57.638 "driver_specific": { 00:18:57.638 
"nvme": [ 00:18:57.638 { 00:18:57.638 "pci_address": "0000:00:11.0", 00:18:57.638 "trid": { 00:18:57.638 "trtype": "PCIe", 00:18:57.638 "traddr": "0000:00:11.0" 00:18:57.638 }, 00:18:57.638 "ctrlr_data": { 00:18:57.638 "cntlid": 0, 00:18:57.638 "vendor_id": "0x1b36", 00:18:57.638 "model_number": "QEMU NVMe Ctrl", 00:18:57.638 "serial_number": "12341", 00:18:57.638 "firmware_revision": "8.0.0", 00:18:57.638 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:57.638 "oacs": { 00:18:57.638 "security": 0, 00:18:57.638 "format": 1, 00:18:57.638 "firmware": 0, 00:18:57.638 "ns_manage": 1 00:18:57.638 }, 00:18:57.638 "multi_ctrlr": false, 00:18:57.638 "ana_reporting": false 00:18:57.638 }, 00:18:57.638 "vs": { 00:18:57.638 "nvme_version": "1.4" 00:18:57.638 }, 00:18:57.638 "ns_data": { 00:18:57.638 "id": 1, 00:18:57.638 "can_share": false 00:18:57.638 } 00:18:57.638 } 00:18:57.638 ], 00:18:57.638 "mp_policy": "active_passive" 00:18:57.638 } 00:18:57.638 } 00:18:57.638 ]' 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:57.638 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:57.900 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c861a0d9-3818-45c7-928d-09802fac03b8 00:18:57.900 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:57.900 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c861a0d9-3818-45c7-928d-09802fac03b8 00:18:58.159 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:58.417 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76 00:18:58.417 23:56:30 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:58.417 23:56:31 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:58.417 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:58.674 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:58.674 { 00:18:58.674 "name": "5f9ad43a-a2bf-4d48-a581-258ea690aba9", 00:18:58.674 "aliases": [ 00:18:58.674 "lvs/nvme0n1p0" 00:18:58.674 ], 00:18:58.674 "product_name": "Logical Volume", 00:18:58.674 "block_size": 4096, 00:18:58.674 "num_blocks": 26476544, 00:18:58.674 "uuid": "5f9ad43a-a2bf-4d48-a581-258ea690aba9", 00:18:58.674 "assigned_rate_limits": { 00:18:58.674 "rw_ios_per_sec": 0, 00:18:58.674 "rw_mbytes_per_sec": 0, 00:18:58.674 "r_mbytes_per_sec": 0, 00:18:58.674 "w_mbytes_per_sec": 0 00:18:58.674 }, 00:18:58.674 "claimed": false, 00:18:58.674 "zoned": false, 00:18:58.674 "supported_io_types": { 00:18:58.674 "read": true, 00:18:58.674 "write": true, 00:18:58.674 "unmap": true, 00:18:58.674 "flush": false, 00:18:58.674 "reset": true, 00:18:58.674 "nvme_admin": false, 00:18:58.674 "nvme_io": false, 00:18:58.674 "nvme_io_md": false, 00:18:58.674 "write_zeroes": true, 00:18:58.674 "zcopy": false, 00:18:58.674 "get_zone_info": false, 00:18:58.674 "zone_management": false, 00:18:58.674 "zone_append": false, 00:18:58.674 "compare": false, 00:18:58.674 "compare_and_write": false, 00:18:58.674 "abort": false, 00:18:58.674 "seek_hole": true, 00:18:58.674 "seek_data": true, 00:18:58.674 "copy": false, 00:18:58.674 "nvme_iov_md": false 00:18:58.674 }, 00:18:58.674 "driver_specific": { 00:18:58.674 "lvol": { 00:18:58.674 "lvol_store_uuid": "f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76", 00:18:58.674 "base_bdev": "nvme0n1", 00:18:58.674 "thin_provision": true, 00:18:58.674 "num_allocated_clusters": 0, 00:18:58.674 "snapshot": false, 00:18:58.674 "clone": false, 00:18:58.674 "esnap_clone": false 00:18:58.674 } 00:18:58.674 } 00:18:58.674 } 00:18:58.674 ]' 00:18:58.674 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:58.674 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:58.674 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:58.932 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:58.932 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:58.932 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:58.932 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:58.932 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:58.932 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:59.189 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:59.189 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:59.189 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:59.189 23:56:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:59.189 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.190 { 00:18:59.190 "name": "5f9ad43a-a2bf-4d48-a581-258ea690aba9", 00:18:59.190 "aliases": [ 00:18:59.190 "lvs/nvme0n1p0" 00:18:59.190 ], 00:18:59.190 "product_name": "Logical Volume", 00:18:59.190 "block_size": 4096, 00:18:59.190 "num_blocks": 26476544, 00:18:59.190 "uuid": "5f9ad43a-a2bf-4d48-a581-258ea690aba9", 00:18:59.190 "assigned_rate_limits": { 00:18:59.190 "rw_ios_per_sec": 0, 00:18:59.190 "rw_mbytes_per_sec": 0, 00:18:59.190 "r_mbytes_per_sec": 0, 00:18:59.190 "w_mbytes_per_sec": 0 00:18:59.190 }, 00:18:59.190 "claimed": false, 00:18:59.190 "zoned": false, 00:18:59.190 "supported_io_types": { 00:18:59.190 "read": true, 00:18:59.190 "write": true, 00:18:59.190 "unmap": true, 00:18:59.190 "flush": false, 00:18:59.190 "reset": true, 00:18:59.190 "nvme_admin": false, 00:18:59.190 "nvme_io": false, 00:18:59.190 "nvme_io_md": false, 00:18:59.190 "write_zeroes": true, 00:18:59.190 "zcopy": false, 00:18:59.190 "get_zone_info": false, 00:18:59.190 "zone_management": false, 00:18:59.190 "zone_append": false, 00:18:59.190 "compare": false, 00:18:59.190 "compare_and_write": false, 00:18:59.190 "abort": false, 00:18:59.190 "seek_hole": true, 00:18:59.190 "seek_data": true, 00:18:59.190 "copy": false, 00:18:59.190 "nvme_iov_md": false 00:18:59.190 }, 00:18:59.190 "driver_specific": { 00:18:59.190 "lvol": { 00:18:59.190 "lvol_store_uuid": "f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76", 00:18:59.190 "base_bdev": "nvme0n1", 00:18:59.190 "thin_provision": true, 00:18:59.190 "num_allocated_clusters": 0, 00:18:59.190 "snapshot": false, 00:18:59.190 "clone": false, 00:18:59.190 "esnap_clone": false 00:18:59.190 } 00:18:59.190 } 00:18:59.190 } 00:18:59.190 ]' 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.190 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.448 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.448 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.448 23:56:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.448 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:59.448 23:56:31 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.448 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f9ad43a-a2bf-4d48-a581-258ea690aba9 00:18:59.706 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.706 { 00:18:59.706 "name": "5f9ad43a-a2bf-4d48-a581-258ea690aba9", 00:18:59.706 "aliases": [ 00:18:59.706 "lvs/nvme0n1p0" 00:18:59.706 ], 00:18:59.706 "product_name": "Logical Volume", 00:18:59.707 "block_size": 4096, 00:18:59.707 "num_blocks": 26476544, 00:18:59.707 "uuid": "5f9ad43a-a2bf-4d48-a581-258ea690aba9", 00:18:59.707 "assigned_rate_limits": { 00:18:59.707 "rw_ios_per_sec": 0, 00:18:59.707 "rw_mbytes_per_sec": 0, 00:18:59.707 "r_mbytes_per_sec": 0, 00:18:59.707 "w_mbytes_per_sec": 0 00:18:59.707 }, 00:18:59.707 "claimed": false, 00:18:59.707 "zoned": false, 00:18:59.707 "supported_io_types": { 00:18:59.707 "read": true, 00:18:59.707 "write": true, 00:18:59.707 "unmap": true, 00:18:59.707 "flush": false, 00:18:59.707 "reset": true, 00:18:59.707 "nvme_admin": false, 00:18:59.707 "nvme_io": false, 00:18:59.707 "nvme_io_md": false, 00:18:59.707 "write_zeroes": true, 00:18:59.707 "zcopy": false, 00:18:59.707 "get_zone_info": false, 00:18:59.707 "zone_management": false, 00:18:59.707 "zone_append": false, 00:18:59.707 "compare": false, 00:18:59.707 "compare_and_write": false, 00:18:59.707 "abort": false, 00:18:59.707 "seek_hole": true, 00:18:59.707 "seek_data": true, 00:18:59.707 "copy": false, 00:18:59.707 "nvme_iov_md": false 00:18:59.707 }, 00:18:59.707 "driver_specific": { 00:18:59.707 "lvol": { 00:18:59.707 "lvol_store_uuid": "f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76", 00:18:59.707 "base_bdev": "nvme0n1", 00:18:59.707 "thin_provision": true, 00:18:59.707 "num_allocated_clusters": 0, 00:18:59.707 "snapshot": false, 00:18:59.707 "clone": false, 00:18:59.707 "esnap_clone": false 00:18:59.707 } 00:18:59.707 } 00:18:59.707 } 00:18:59.707 ]' 00:18:59.707 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.707 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.707 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.966 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.966 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.966 23:56:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.966 23:56:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:59.966 23:56:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5f9ad43a-a2bf-4d48-a581-258ea690aba9 -c nvc0n1p0 --l2p_dram_limit 20 00:18:59.966 [2024-12-05 23:56:32.633746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.633795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.966 [2024-12-05 23:56:32.633809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.966 [2024-12-05 23:56:32.633819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.633882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.633894] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.966 [2024-12-05 23:56:32.633903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:59.966 [2024-12-05 23:56:32.633912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.633928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.966 [2024-12-05 23:56:32.634726] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.966 [2024-12-05 23:56:32.634744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.634753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.966 [2024-12-05 23:56:32.634762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:18:59.966 [2024-12-05 23:56:32.634771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.634799] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4b1abe5f-2098-44c5-9711-175b8986601d 00:18:59.966 [2024-12-05 23:56:32.635855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.635885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:59.966 [2024-12-05 23:56:32.635899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:59.966 [2024-12-05 23:56:32.635906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.641144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.641172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.966 [2024-12-05 23:56:32.641184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:18:59.966 [2024-12-05 23:56:32.641196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.641424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.641437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.966 [2024-12-05 23:56:32.641450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:18:59.966 [2024-12-05 23:56:32.641457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.641495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.641503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.966 [2024-12-05 23:56:32.641512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:59.966 [2024-12-05 23:56:32.641520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.641542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.966 [2024-12-05 23:56:32.645129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.645161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.966 [2024-12-05 23:56:32.645170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:18:59.966 [2024-12-05 23:56:32.645182] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.645213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.645223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.966 [2024-12-05 23:56:32.645230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:59.966 [2024-12-05 23:56:32.645239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.645259] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:59.966 [2024-12-05 23:56:32.645401] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:59.966 [2024-12-05 23:56:32.645413] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.966 [2024-12-05 23:56:32.645425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:59.966 [2024-12-05 23:56:32.645435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.966 [2024-12-05 23:56:32.645446] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.966 [2024-12-05 23:56:32.645453] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:59.966 [2024-12-05 23:56:32.645462] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:59.966 [2024-12-05 23:56:32.645469] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:59.966 [2024-12-05 23:56:32.645479] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:59.966 [2024-12-05 23:56:32.645488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.645497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.966 [2024-12-05 23:56:32.645504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:18:59.966 [2024-12-05 23:56:32.645512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.645609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.966 [2024-12-05 23:56:32.645620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.966 [2024-12-05 23:56:32.645627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:59.966 [2024-12-05 23:56:32.645637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.966 [2024-12-05 23:56:32.645725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.966 [2024-12-05 23:56:32.645738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.966 [2024-12-05 23:56:32.645745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.966 [2024-12-05 23:56:32.645754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.966 [2024-12-05 23:56:32.645761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.966 [2024-12-05 23:56:32.645769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.966 [2024-12-05 23:56:32.645776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:59.966 
[2024-12-05 23:56:32.645784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:59.966 [2024-12-05 23:56:32.645791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:59.966 [2024-12-05 23:56:32.645798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.966 [2024-12-05 23:56:32.645805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.966 [2024-12-05 23:56:32.645820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:59.966 [2024-12-05 23:56:32.645827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.966 [2024-12-05 23:56:32.645835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.966 [2024-12-05 23:56:32.645842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:59.966 [2024-12-05 23:56:32.645851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.966 [2024-12-05 23:56:32.645857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.967 [2024-12-05 23:56:32.645865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:59.967 [2024-12-05 23:56:32.645871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.967 [2024-12-05 23:56:32.645880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.967 [2024-12-05 23:56:32.645887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:59.967 [2024-12-05 23:56:32.645896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.967 [2024-12-05 23:56:32.645902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.967 [2024-12-05 23:56:32.645910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:59.967 [2024-12-05 23:56:32.645923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.967 [2024-12-05 23:56:32.645931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.967 [2024-12-05 23:56:32.645938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:59.967 [2024-12-05 23:56:32.645946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.967 [2024-12-05 23:56:32.645953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.967 [2024-12-05 23:56:32.645961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:59.967 [2024-12-05 23:56:32.645981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.967 [2024-12-05 23:56:32.645991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.967 [2024-12-05 23:56:32.645998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:59.967 [2024-12-05 23:56:32.646006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.967 [2024-12-05 23:56:32.646012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.967 [2024-12-05 23:56:32.646020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:59.967 [2024-12-05 23:56:32.646027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.967 [2024-12-05 23:56:32.646038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:59.967 [2024-12-05 23:56:32.646045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:59.967 [2024-12-05 23:56:32.646053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.967 [2024-12-05 23:56:32.646059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:59.967 [2024-12-05 23:56:32.646068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:59.967 [2024-12-05 23:56:32.646074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.967 [2024-12-05 23:56:32.646082] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.967 [2024-12-05 23:56:32.646090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.967 [2024-12-05 23:56:32.646099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.967 [2024-12-05 23:56:32.646106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.967 [2024-12-05 23:56:32.646116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.967 [2024-12-05 23:56:32.646123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.967 [2024-12-05 23:56:32.646131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.967 [2024-12-05 23:56:32.646138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:59.967 [2024-12-05 23:56:32.646145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.967 [2024-12-05 23:56:32.646152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:59.967 [2024-12-05 23:56:32.646161] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.967 [2024-12-05 23:56:32.646170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:59.967 [2024-12-05 23:56:32.646187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:59.967 [2024-12-05 23:56:32.646196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:59.967 [2024-12-05 23:56:32.646203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:59.967 [2024-12-05 23:56:32.646211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:59.967 [2024-12-05 23:56:32.646218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:59.967 [2024-12-05 23:56:32.646226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:59.967 [2024-12-05 23:56:32.646233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:59.967 [2024-12-05 23:56:32.646244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:59.967 [2024-12-05 23:56:32.646251] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:59.967 [2024-12-05 23:56:32.646291] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.967 [2024-12-05 23:56:32.646299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.967 [2024-12-05 23:56:32.646317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.967 [2024-12-05 23:56:32.646326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.967 [2024-12-05 23:56:32.646333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.967 [2024-12-05 23:56:32.646342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.967 [2024-12-05 23:56:32.646349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.967 [2024-12-05 23:56:32.646358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:18:59.967 [2024-12-05 23:56:32.646365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.967 [2024-12-05 23:56:32.646403] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
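Note: the dump_region lines above give each FTL region in MiB, while the superblock metadata layout repeats the same regions as raw block offsets and sizes (blk_offs/blk_sz). Assuming a 4 KiB FTL block size (not stated in this log) and that the type:0x3 entry corresponds to the band_md region, the two views can be cross-checked with a small shell sketch:
# Hypothetical cross-check, not part of the test run: convert one SB layout entry
# (type:0x3, blk_offs:0x5020, blk_sz:0x80) to MiB, assuming 4 KiB FTL blocks.
awk -v off=$((0x5020)) -v sz=$((0x80)) -v blk=4096 \
    'BEGIN { printf "offset: %.2f MiB, size: %.2f MiB\n", off*blk/2^20, sz*blk/2^20 }'
# prints roughly "offset: 80.12 MiB, size: 0.50 MiB", matching the band_md region above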
00:18:59.967 [2024-12-05 23:56:32.646414] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:04.165 [2024-12-05 23:56:36.225255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.225337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:04.165 [2024-12-05 23:56:36.225356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3578.836 ms 00:19:04.165 [2024-12-05 23:56:36.225366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.255264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.255323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.165 [2024-12-05 23:56:36.255339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.564 ms 00:19:04.165 [2024-12-05 23:56:36.255348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.255504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.255515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.165 [2024-12-05 23:56:36.255530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:04.165 [2024-12-05 23:56:36.255539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.299604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.299659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.165 [2024-12-05 23:56:36.299675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.028 ms 00:19:04.165 [2024-12-05 23:56:36.299684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.299729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.299739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.165 [2024-12-05 23:56:36.299750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:04.165 [2024-12-05 23:56:36.299761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.300346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.300369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.165 [2024-12-05 23:56:36.300381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:19:04.165 [2024-12-05 23:56:36.300389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.300512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.300522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.165 [2024-12-05 23:56:36.300535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:04.165 [2024-12-05 23:56:36.300543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.316094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.316138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.165 [2024-12-05 
23:56:36.316151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.530 ms 00:19:04.165 [2024-12-05 23:56:36.316169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.165 [2024-12-05 23:56:36.329444] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:04.165 [2024-12-05 23:56:36.337017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.165 [2024-12-05 23:56:36.337070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.165 [2024-12-05 23:56:36.337082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.768 ms 00:19:04.165 [2024-12-05 23:56:36.337093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.450003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.450077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:04.166 [2024-12-05 23:56:36.450093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.875 ms 00:19:04.166 [2024-12-05 23:56:36.450105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.450309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.450326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.166 [2024-12-05 23:56:36.450336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:04.166 [2024-12-05 23:56:36.450351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.477461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.477523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:04.166 [2024-12-05 23:56:36.477535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.058 ms 00:19:04.166 [2024-12-05 23:56:36.477546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.503269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.503323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:04.166 [2024-12-05 23:56:36.503336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.674 ms 00:19:04.166 [2024-12-05 23:56:36.503346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.504192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.504228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.166 [2024-12-05 23:56:36.504239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:19:04.166 [2024-12-05 23:56:36.504249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.598692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.598758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:04.166 [2024-12-05 23:56:36.598773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.371 ms 00:19:04.166 [2024-12-05 23:56:36.598784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 
23:56:36.627342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.627415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:04.166 [2024-12-05 23:56:36.627431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.466 ms 00:19:04.166 [2024-12-05 23:56:36.627444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.654279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.654344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:04.166 [2024-12-05 23:56:36.654357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.785 ms 00:19:04.166 [2024-12-05 23:56:36.654368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.681009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.681072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.166 [2024-12-05 23:56:36.681084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.598 ms 00:19:04.166 [2024-12-05 23:56:36.681094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.681145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.681160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.166 [2024-12-05 23:56:36.681170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:04.166 [2024-12-05 23:56:36.681180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.681273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.166 [2024-12-05 23:56:36.681287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.166 [2024-12-05 23:56:36.681297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:04.166 [2024-12-05 23:56:36.681308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.166 [2024-12-05 23:56:36.682481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4048.218 ms, result 0 00:19:04.166 { 00:19:04.166 "name": "ftl0", 00:19:04.166 "uuid": "4b1abe5f-2098-44c5-9711-175b8986601d" 00:19:04.166 } 00:19:04.166 23:56:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:04.166 23:56:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:04.166 23:56:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:04.428 23:56:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:04.428 [2024-12-05 23:56:37.014564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:04.428 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:04.428 Zero copy mechanism will not be used. 00:19:04.428 Running I/O for 4 seconds... 
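The first workload above uses 69632-byte IOs, which exceeds bdevperf's 65536-byte zero-copy threshold, hence the notice that the zero-copy path is skipped. Re-run by hand against an already-running bdevperf instance, the same test would look like this (flags taken from the trace; the repo path is the CI layout):
# Sketch only: same invocation as in the trace above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    perform_tests -q 1 -w randwrite -t 4 -o 69632
# 69632 B = 68 KiB, i.e. 17 blocks of 4 KiB (block size assumed), vs. the 64 KiB threshold.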
00:19:06.757 694.00 IOPS, 46.09 MiB/s [2024-12-05T23:56:40.037Z] 784.50 IOPS, 52.10 MiB/s [2024-12-05T23:56:41.418Z] 826.67 IOPS, 54.90 MiB/s [2024-12-05T23:56:41.418Z] 782.25 IOPS, 51.95 MiB/s 00:19:08.709 Latency(us) 00:19:08.709 [2024-12-05T23:56:41.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.710 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:08.710 ftl0 : 4.00 781.98 51.93 0.00 0.00 1355.56 272.54 3705.30 00:19:08.710 [2024-12-05T23:56:41.419Z] =================================================================================================================== 00:19:08.710 [2024-12-05T23:56:41.419Z] Total : 781.98 51.93 0.00 0.00 1355.56 272.54 3705.30 00:19:08.710 [2024-12-05 23:56:41.026283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:08.710 { 00:19:08.710 "results": [ 00:19:08.710 { 00:19:08.710 "job": "ftl0", 00:19:08.710 "core_mask": "0x1", 00:19:08.710 "workload": "randwrite", 00:19:08.710 "status": "finished", 00:19:08.710 "queue_depth": 1, 00:19:08.710 "io_size": 69632, 00:19:08.710 "runtime": 4.002664, 00:19:08.710 "iops": 781.9792018515668, 00:19:08.710 "mibps": 51.92830637295561, 00:19:08.710 "io_failed": 0, 00:19:08.710 "io_timeout": 0, 00:19:08.710 "avg_latency_us": 1355.5603283362004, 00:19:08.710 "min_latency_us": 272.54153846153844, 00:19:08.710 "max_latency_us": 3705.3046153846153 00:19:08.710 } 00:19:08.710 ], 00:19:08.710 "core_count": 1 00:19:08.710 } 00:19:08.710 23:56:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:08.710 [2024-12-05 23:56:41.142709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.710 Running I/O for 4 seconds... 
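Each run finishes by printing a Latency(us) table and a JSON results block like the one above. If that JSON is captured to a file, the headline numbers can be pulled out with jq; a minimal sketch (the file name is hypothetical):
# Extract job name, IOPS, MiB/s and average latency from a saved results block.
jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json
# ftl0    781.979...    51.928...    1355.560...   (values from the q=1 run above)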
00:19:10.587 5561.00 IOPS, 21.72 MiB/s [2024-12-05T23:56:44.240Z] 5091.50 IOPS, 19.89 MiB/s [2024-12-05T23:56:45.184Z] 5060.33 IOPS, 19.77 MiB/s [2024-12-05T23:56:45.442Z] 4907.50 IOPS, 19.17 MiB/s 00:19:12.733 Latency(us) 00:19:12.733 [2024-12-05T23:56:45.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.733 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.733 ftl0 : 4.04 4895.34 19.12 0.00 0.00 26033.28 463.16 138734.67 00:19:12.733 [2024-12-05T23:56:45.442Z] =================================================================================================================== 00:19:12.733 [2024-12-05T23:56:45.443Z] Total : 4895.34 19.12 0.00 0.00 26033.28 0.00 138734.67 00:19:12.734 [2024-12-05 23:56:45.189090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:12.734 { 00:19:12.734 "results": [ 00:19:12.734 { 00:19:12.734 "job": "ftl0", 00:19:12.734 "core_mask": "0x1", 00:19:12.734 "workload": "randwrite", 00:19:12.734 "status": "finished", 00:19:12.734 "queue_depth": 128, 00:19:12.734 "io_size": 4096, 00:19:12.734 "runtime": 4.036082, 00:19:12.734 "iops": 4895.341571355587, 00:19:12.734 "mibps": 19.12242801310776, 00:19:12.734 "io_failed": 0, 00:19:12.734 "io_timeout": 0, 00:19:12.734 "avg_latency_us": 26033.283154476863, 00:19:12.734 "min_latency_us": 463.1630769230769, 00:19:12.734 "max_latency_us": 138734.67076923078 00:19:12.734 } 00:19:12.734 ], 00:19:12.734 "core_count": 1 00:19:12.734 } 00:19:12.734 23:56:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:12.734 [2024-12-05 23:56:45.306283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:12.734 Running I/O for 4 seconds... 
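As a quick consistency check on the q=128 randwrite numbers above, throughput in MiB/s should equal IOPS times the 4096-byte IO size:
# Sanity check of the reported mibps for the q=128, 4 KiB randwrite run.
awk 'BEGIN { printf "%.2f MiB/s\n", 4895.34 * 4096 / 2^20 }'
# ~19.12 MiB/s, matching the value in the table and JSON above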
00:19:14.609 4903.00 IOPS, 19.15 MiB/s [2024-12-05T23:56:48.702Z] 4872.50 IOPS, 19.03 MiB/s [2024-12-05T23:56:49.638Z] 4794.33 IOPS, 18.73 MiB/s [2024-12-05T23:56:49.638Z] 4737.25 IOPS, 18.50 MiB/s 00:19:16.929 Latency(us) 00:19:16.929 [2024-12-05T23:56:49.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.929 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:16.929 Verification LBA range: start 0x0 length 0x1400000 00:19:16.929 ftl0 : 4.01 4752.56 18.56 0.00 0.00 26860.66 337.13 39119.95 00:19:16.929 [2024-12-05T23:56:49.638Z] =================================================================================================================== 00:19:16.929 [2024-12-05T23:56:49.638Z] Total : 4752.56 18.56 0.00 0.00 26860.66 0.00 39119.95 00:19:16.929 [2024-12-05 23:56:49.335023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:16.929 { 00:19:16.929 "results": [ 00:19:16.929 { 00:19:16.929 "job": "ftl0", 00:19:16.929 "core_mask": "0x1", 00:19:16.929 "workload": "verify", 00:19:16.929 "status": "finished", 00:19:16.929 "verify_range": { 00:19:16.929 "start": 0, 00:19:16.929 "length": 20971520 00:19:16.929 }, 00:19:16.929 "queue_depth": 128, 00:19:16.929 "io_size": 4096, 00:19:16.929 "runtime": 4.012995, 00:19:16.929 "iops": 4752.5601203091455, 00:19:16.929 "mibps": 18.5646879699576, 00:19:16.929 "io_failed": 0, 00:19:16.929 "io_timeout": 0, 00:19:16.929 "avg_latency_us": 26860.660898296333, 00:19:16.929 "min_latency_us": 337.1323076923077, 00:19:16.929 "max_latency_us": 39119.95076923077 00:19:16.929 } 00:19:16.929 ], 00:19:16.929 "core_count": 1 00:19:16.929 } 00:19:16.929 23:56:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:16.929 [2024-12-05 23:56:49.536787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.929 [2024-12-05 23:56:49.536833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.929 [2024-12-05 23:56:49.536845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:16.929 [2024-12-05 23:56:49.536855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.929 [2024-12-05 23:56:49.536876] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.929 [2024-12-05 23:56:49.539492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.929 [2024-12-05 23:56:49.539521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.929 [2024-12-05 23:56:49.539534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:19:16.929 [2024-12-05 23:56:49.539543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.929 [2024-12-05 23:56:49.542341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.929 [2024-12-05 23:56:49.542375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.929 [2024-12-05 23:56:49.542389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:19:16.929 [2024-12-05 23:56:49.542396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.736040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.736085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:17.189 [2024-12-05 23:56:49.736102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 193.623 ms 00:19:17.189 [2024-12-05 23:56:49.736112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.742313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.742345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:17.189 [2024-12-05 23:56:49.742358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:19:17.189 [2024-12-05 23:56:49.742369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.766679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.766712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:17.189 [2024-12-05 23:56:49.766725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.254 ms 00:19:17.189 [2024-12-05 23:56:49.766733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.782511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.782547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:17.189 [2024-12-05 23:56:49.782559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.743 ms 00:19:17.189 [2024-12-05 23:56:49.782567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.782714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.782726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:17.189 [2024-12-05 23:56:49.782739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:17.189 [2024-12-05 23:56:49.782746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.806279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.806311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:17.189 [2024-12-05 23:56:49.806322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.516 ms 00:19:17.189 [2024-12-05 23:56:49.806330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.829357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.829388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:17.189 [2024-12-05 23:56:49.829400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.993 ms 00:19:17.189 [2024-12-05 23:56:49.829408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.851599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.851630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.189 [2024-12-05 23:56:49.851642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.155 ms 00:19:17.189 [2024-12-05 23:56:49.851648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.874796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.189 [2024-12-05 23:56:49.874828] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.189 [2024-12-05 23:56:49.874842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.085 ms 00:19:17.189 [2024-12-05 23:56:49.874850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.189 [2024-12-05 23:56:49.874880] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.189 [2024-12-05 23:56:49.874894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.189 [2024-12-05 23:56:49.874905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.189 [2024-12-05 23:56:49.874914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.874999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:17.190 [2024-12-05 23:56:49.875113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:17.190 [2024-12-05 23:56:49.875793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875835] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:17.191 [2024-12-05 23:56:49.875878] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:17.191 [2024-12-05 23:56:49.875887] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b1abe5f-2098-44c5-9711-175b8986601d 00:19:17.191 [2024-12-05 23:56:49.875898] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.191 [2024-12-05 23:56:49.875906] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.191 [2024-12-05 23:56:49.875913] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.191 [2024-12-05 23:56:49.875922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.191 [2024-12-05 23:56:49.875929] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.191 [2024-12-05 23:56:49.875938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.191 [2024-12-05 23:56:49.875945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.191 [2024-12-05 23:56:49.875954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.191 [2024-12-05 23:56:49.875960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.191 [2024-12-05 23:56:49.875981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.191 [2024-12-05 23:56:49.875989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.191 [2024-12-05 23:56:49.875999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:19:17.191 [2024-12-05 23:56:49.876006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.191 [2024-12-05 23:56:49.888192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.191 [2024-12-05 23:56:49.888222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.191 [2024-12-05 23:56:49.888234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 00:19:17.191 [2024-12-05 23:56:49.888242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.191 [2024-12-05 23:56:49.888615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.191 [2024-12-05 23:56:49.888634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.191 [2024-12-05 23:56:49.888645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:19:17.191 [2024-12-05 23:56:49.888652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:49.923550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:49.923586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.449 [2024-12-05 23:56:49.923601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:49.923610] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:49.923666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:49.923675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.449 [2024-12-05 23:56:49.923684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:49.923691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:49.923760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:49.923770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.449 [2024-12-05 23:56:49.923779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:49.923786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:49.923802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:49.923810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.449 [2024-12-05 23:56:49.923819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:49.923826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:49.999218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:49.999265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.449 [2024-12-05 23:56:49.999281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:49.999289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.061846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.061898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.449 [2024-12-05 23:56:50.061910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:50.061918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.062033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.449 [2024-12-05 23:56:50.062043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:50.062050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.062102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.449 [2024-12-05 23:56:50.062112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:50.062119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.062246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.449 [2024-12-05 23:56:50.062265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:17.449 [2024-12-05 23:56:50.062273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.062321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.449 [2024-12-05 23:56:50.062331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:50.062338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.062384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.449 [2024-12-05 23:56:50.062393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:50.062406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.449 [2024-12-05 23:56:50.062462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.449 [2024-12-05 23:56:50.062471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.449 [2024-12-05 23:56:50.062478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.449 [2024-12-05 23:56:50.062596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.773 ms, result 0 00:19:17.449 true 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75999 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75999 ']' 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75999 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75999 00:19:17.449 killing process with pid 75999 00:19:17.449 Received shutdown signal, test time was about 4.000000 seconds 00:19:17.449 00:19:17.449 Latency(us) 00:19:17.449 [2024-12-05T23:56:50.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.449 [2024-12-05T23:56:50.158Z] =================================================================================================================== 00:19:17.449 [2024-12-05T23:56:50.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75999' 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75999 00:19:17.449 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75999 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:18.386 Remove shared memory files 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:18.386 23:56:50 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:18.386 23:56:50 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:18.386 00:19:18.387 real 0m22.308s 00:19:18.387 user 0m24.986s 00:19:18.387 sys 0m0.919s 00:19:18.387 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.387 ************************************ 00:19:18.387 END TEST ftl_bdevperf 00:19:18.387 ************************************ 00:19:18.387 23:56:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.387 23:56:50 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:18.387 23:56:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:18.387 23:56:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.387 23:56:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:18.387 ************************************ 00:19:18.387 START TEST ftl_trim 00:19:18.387 ************************************ 00:19:18.387 23:56:50 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:18.387 * Looking for test storage... 00:19:18.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.387 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:18.387 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:18.387 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:18.649 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.649 23:56:51 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:18.649 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.649 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.649 --rc genhtml_branch_coverage=1 00:19:18.649 --rc genhtml_function_coverage=1 00:19:18.649 --rc genhtml_legend=1 00:19:18.649 --rc geninfo_all_blocks=1 00:19:18.649 --rc geninfo_unexecuted_blocks=1 00:19:18.649 00:19:18.649 ' 00:19:18.649 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.649 --rc genhtml_branch_coverage=1 00:19:18.649 --rc genhtml_function_coverage=1 00:19:18.649 --rc genhtml_legend=1 00:19:18.649 --rc geninfo_all_blocks=1 00:19:18.649 --rc geninfo_unexecuted_blocks=1 00:19:18.649 00:19:18.649 ' 00:19:18.649 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:18.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.649 --rc genhtml_branch_coverage=1 00:19:18.650 --rc genhtml_function_coverage=1 00:19:18.650 --rc genhtml_legend=1 00:19:18.650 --rc geninfo_all_blocks=1 00:19:18.650 --rc geninfo_unexecuted_blocks=1 00:19:18.650 00:19:18.650 ' 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:18.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.650 --rc genhtml_branch_coverage=1 00:19:18.650 --rc genhtml_function_coverage=1 00:19:18.650 --rc genhtml_legend=1 00:19:18.650 --rc geninfo_all_blocks=1 00:19:18.650 --rc geninfo_unexecuted_blocks=1 00:19:18.650 00:19:18.650 ' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
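The ftl_trim test above is driven by test/ftl/trim.sh, which takes the base NVMe device and the cache device as PCI addresses. Outside the CI wrapper it could be started roughly like this (a sketch; root privileges and the device addresses from this run are assumed):
# Hypothetical standalone invocation of the trim test exercised above.
cd /home/vagrant/spdk_repo/spdk
sudo ./test/ftl/trim.sh 0000:00:11.0 0000:00:10.0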
00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.650 23:56:51 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76345 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76345 00:19:18.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76345 ']' 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.650 23:56:51 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.650 23:56:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:18.650 [2024-12-05 23:56:51.212825] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:19:18.650 [2024-12-05 23:56:51.212958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76345 ] 00:19:18.912 [2024-12-05 23:56:51.374918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:18.912 [2024-12-05 23:56:51.509257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.912 [2024-12-05 23:56:51.509686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.912 [2024-12-05 23:56:51.509815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:19.855 23:56:52 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:19.855 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:20.114 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.114 { 00:19:20.114 "name": "nvme0n1", 00:19:20.114 "aliases": [ 
00:19:20.114 "defbf44c-6d25-424e-a011-19cc13a3c372" 00:19:20.114 ], 00:19:20.114 "product_name": "NVMe disk", 00:19:20.114 "block_size": 4096, 00:19:20.114 "num_blocks": 1310720, 00:19:20.114 "uuid": "defbf44c-6d25-424e-a011-19cc13a3c372", 00:19:20.114 "numa_id": -1, 00:19:20.114 "assigned_rate_limits": { 00:19:20.114 "rw_ios_per_sec": 0, 00:19:20.114 "rw_mbytes_per_sec": 0, 00:19:20.114 "r_mbytes_per_sec": 0, 00:19:20.114 "w_mbytes_per_sec": 0 00:19:20.114 }, 00:19:20.114 "claimed": true, 00:19:20.114 "claim_type": "read_many_write_one", 00:19:20.115 "zoned": false, 00:19:20.115 "supported_io_types": { 00:19:20.115 "read": true, 00:19:20.115 "write": true, 00:19:20.115 "unmap": true, 00:19:20.115 "flush": true, 00:19:20.115 "reset": true, 00:19:20.115 "nvme_admin": true, 00:19:20.115 "nvme_io": true, 00:19:20.115 "nvme_io_md": false, 00:19:20.115 "write_zeroes": true, 00:19:20.115 "zcopy": false, 00:19:20.115 "get_zone_info": false, 00:19:20.115 "zone_management": false, 00:19:20.115 "zone_append": false, 00:19:20.115 "compare": true, 00:19:20.115 "compare_and_write": false, 00:19:20.115 "abort": true, 00:19:20.115 "seek_hole": false, 00:19:20.115 "seek_data": false, 00:19:20.115 "copy": true, 00:19:20.115 "nvme_iov_md": false 00:19:20.115 }, 00:19:20.115 "driver_specific": { 00:19:20.115 "nvme": [ 00:19:20.115 { 00:19:20.115 "pci_address": "0000:00:11.0", 00:19:20.115 "trid": { 00:19:20.115 "trtype": "PCIe", 00:19:20.115 "traddr": "0000:00:11.0" 00:19:20.115 }, 00:19:20.115 "ctrlr_data": { 00:19:20.115 "cntlid": 0, 00:19:20.115 "vendor_id": "0x1b36", 00:19:20.115 "model_number": "QEMU NVMe Ctrl", 00:19:20.115 "serial_number": "12341", 00:19:20.115 "firmware_revision": "8.0.0", 00:19:20.115 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:20.115 "oacs": { 00:19:20.115 "security": 0, 00:19:20.115 "format": 1, 00:19:20.115 "firmware": 0, 00:19:20.115 "ns_manage": 1 00:19:20.115 }, 00:19:20.115 "multi_ctrlr": false, 00:19:20.115 "ana_reporting": false 00:19:20.115 }, 00:19:20.115 "vs": { 00:19:20.115 "nvme_version": "1.4" 00:19:20.115 }, 00:19:20.115 "ns_data": { 00:19:20.115 "id": 1, 00:19:20.115 "can_share": false 00:19:20.115 } 00:19:20.115 } 00:19:20.115 ], 00:19:20.115 "mp_policy": "active_passive" 00:19:20.115 } 00:19:20.115 } 00:19:20.115 ]' 00:19:20.115 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.115 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.115 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.115 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:20.115 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:20.115 23:56:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:20.115 23:56:52 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:20.115 23:56:52 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:20.115 23:56:52 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:20.115 23:56:52 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:20.115 23:56:52 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:20.374 23:56:53 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76 00:19:20.374 23:56:53 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:20.374 23:56:53 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f61e7c7b-f2da-43f2-8f58-a2fdc09c3f76 00:19:20.700 23:56:53 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8fd10eea-cb27-4552-a559-222bb5fe7130 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8fd10eea-cb27-4552-a559-222bb5fe7130 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:20.982 23:56:53 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:20.982 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:20.982 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:20.982 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:20.982 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:20.982 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:21.240 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.240 { 00:19:21.240 "name": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:21.240 "aliases": [ 00:19:21.240 "lvs/nvme0n1p0" 00:19:21.240 ], 00:19:21.240 "product_name": "Logical Volume", 00:19:21.240 "block_size": 4096, 00:19:21.240 "num_blocks": 26476544, 00:19:21.240 "uuid": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:21.240 "assigned_rate_limits": { 00:19:21.240 "rw_ios_per_sec": 0, 00:19:21.240 "rw_mbytes_per_sec": 0, 00:19:21.240 "r_mbytes_per_sec": 0, 00:19:21.240 "w_mbytes_per_sec": 0 00:19:21.240 }, 00:19:21.240 "claimed": false, 00:19:21.240 "zoned": false, 00:19:21.240 "supported_io_types": { 00:19:21.240 "read": true, 00:19:21.240 "write": true, 00:19:21.240 "unmap": true, 00:19:21.240 "flush": false, 00:19:21.240 "reset": true, 00:19:21.240 "nvme_admin": false, 00:19:21.240 "nvme_io": false, 00:19:21.240 "nvme_io_md": false, 00:19:21.240 "write_zeroes": true, 00:19:21.240 "zcopy": false, 00:19:21.240 "get_zone_info": false, 00:19:21.240 "zone_management": false, 00:19:21.240 "zone_append": false, 00:19:21.240 "compare": false, 00:19:21.240 "compare_and_write": false, 00:19:21.240 "abort": false, 00:19:21.240 "seek_hole": true, 00:19:21.240 "seek_data": true, 00:19:21.240 "copy": false, 00:19:21.240 "nvme_iov_md": false 00:19:21.240 }, 00:19:21.240 "driver_specific": { 00:19:21.240 "lvol": { 00:19:21.240 "lvol_store_uuid": "8fd10eea-cb27-4552-a559-222bb5fe7130", 00:19:21.240 "base_bdev": "nvme0n1", 00:19:21.240 "thin_provision": true, 00:19:21.240 "num_allocated_clusters": 0, 00:19:21.240 "snapshot": false, 00:19:21.240 "clone": false, 00:19:21.240 "esnap_clone": false 00:19:21.240 } 00:19:21.240 } 00:19:21.240 } 00:19:21.240 ]' 00:19:21.240 23:56:53 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.240 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.240 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.240 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:21.240 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:21.240 23:56:53 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:21.240 23:56:53 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:21.240 23:56:53 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:21.240 23:56:53 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:21.498 23:56:54 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:21.498 23:56:54 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:21.498 23:56:54 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:21.498 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:21.498 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.498 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:21.498 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:21.498 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:21.757 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.757 { 00:19:21.757 "name": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:21.757 "aliases": [ 00:19:21.757 "lvs/nvme0n1p0" 00:19:21.757 ], 00:19:21.757 "product_name": "Logical Volume", 00:19:21.757 "block_size": 4096, 00:19:21.757 "num_blocks": 26476544, 00:19:21.757 "uuid": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:21.757 "assigned_rate_limits": { 00:19:21.757 "rw_ios_per_sec": 0, 00:19:21.757 "rw_mbytes_per_sec": 0, 00:19:21.757 "r_mbytes_per_sec": 0, 00:19:21.757 "w_mbytes_per_sec": 0 00:19:21.757 }, 00:19:21.757 "claimed": false, 00:19:21.757 "zoned": false, 00:19:21.757 "supported_io_types": { 00:19:21.757 "read": true, 00:19:21.757 "write": true, 00:19:21.757 "unmap": true, 00:19:21.757 "flush": false, 00:19:21.757 "reset": true, 00:19:21.757 "nvme_admin": false, 00:19:21.757 "nvme_io": false, 00:19:21.757 "nvme_io_md": false, 00:19:21.757 "write_zeroes": true, 00:19:21.757 "zcopy": false, 00:19:21.757 "get_zone_info": false, 00:19:21.757 "zone_management": false, 00:19:21.757 "zone_append": false, 00:19:21.757 "compare": false, 00:19:21.757 "compare_and_write": false, 00:19:21.757 "abort": false, 00:19:21.757 "seek_hole": true, 00:19:21.757 "seek_data": true, 00:19:21.757 "copy": false, 00:19:21.757 "nvme_iov_md": false 00:19:21.757 }, 00:19:21.757 "driver_specific": { 00:19:21.757 "lvol": { 00:19:21.757 "lvol_store_uuid": "8fd10eea-cb27-4552-a559-222bb5fe7130", 00:19:21.757 "base_bdev": "nvme0n1", 00:19:21.757 "thin_provision": true, 00:19:21.757 "num_allocated_clusters": 0, 00:19:21.757 "snapshot": false, 00:19:21.757 "clone": false, 00:19:21.757 "esnap_clone": false 00:19:21.757 } 00:19:21.757 } 00:19:21.757 } 00:19:21.757 ]' 00:19:21.757 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.757 23:56:54 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:21.757 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.757 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:21.757 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:21.757 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:21.757 23:56:54 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:21.757 23:56:54 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:22.015 23:56:54 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:22.015 23:56:54 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:22.015 23:56:54 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:22.015 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:22.015 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.015 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.015 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.015 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6dcd2287-cb09-478f-be3e-c881a2d38343 00:19:22.294 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.294 { 00:19:22.294 "name": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:22.294 "aliases": [ 00:19:22.294 "lvs/nvme0n1p0" 00:19:22.294 ], 00:19:22.294 "product_name": "Logical Volume", 00:19:22.294 "block_size": 4096, 00:19:22.294 "num_blocks": 26476544, 00:19:22.294 "uuid": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:22.294 "assigned_rate_limits": { 00:19:22.294 "rw_ios_per_sec": 0, 00:19:22.294 "rw_mbytes_per_sec": 0, 00:19:22.294 "r_mbytes_per_sec": 0, 00:19:22.294 "w_mbytes_per_sec": 0 00:19:22.294 }, 00:19:22.294 "claimed": false, 00:19:22.294 "zoned": false, 00:19:22.294 "supported_io_types": { 00:19:22.294 "read": true, 00:19:22.294 "write": true, 00:19:22.294 "unmap": true, 00:19:22.294 "flush": false, 00:19:22.294 "reset": true, 00:19:22.294 "nvme_admin": false, 00:19:22.294 "nvme_io": false, 00:19:22.294 "nvme_io_md": false, 00:19:22.294 "write_zeroes": true, 00:19:22.294 "zcopy": false, 00:19:22.294 "get_zone_info": false, 00:19:22.294 "zone_management": false, 00:19:22.294 "zone_append": false, 00:19:22.294 "compare": false, 00:19:22.294 "compare_and_write": false, 00:19:22.294 "abort": false, 00:19:22.294 "seek_hole": true, 00:19:22.294 "seek_data": true, 00:19:22.294 "copy": false, 00:19:22.294 "nvme_iov_md": false 00:19:22.294 }, 00:19:22.294 "driver_specific": { 00:19:22.294 "lvol": { 00:19:22.294 "lvol_store_uuid": "8fd10eea-cb27-4552-a559-222bb5fe7130", 00:19:22.294 "base_bdev": "nvme0n1", 00:19:22.294 "thin_provision": true, 00:19:22.294 "num_allocated_clusters": 0, 00:19:22.294 "snapshot": false, 00:19:22.294 "clone": false, 00:19:22.294 "esnap_clone": false 00:19:22.294 } 00:19:22.294 } 00:19:22.294 } 00:19:22.294 ]' 00:19:22.294 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.294 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.294 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.294 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:22.294 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.295 23:56:54 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.295 23:56:54 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:22.295 23:56:54 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6dcd2287-cb09-478f-be3e-c881a2d38343 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:22.553 [2024-12-05 23:56:55.078678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.078847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.553 [2024-12-05 23:56:55.078879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.553 [2024-12-05 23:56:55.078888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.081766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.081800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.553 [2024-12-05 23:56:55.081811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:19:22.553 [2024-12-05 23:56:55.081819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.081934] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.553 [2024-12-05 23:56:55.083684] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.553 [2024-12-05 23:56:55.083729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.083740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.553 [2024-12-05 23:56:55.083751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:19:22.553 [2024-12-05 23:56:55.083758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.083869] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cc09c2db-6e6c-420a-8d4b-8435768d4837 00:19:22.553 [2024-12-05 23:56:55.084978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.085008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:22.553 [2024-12-05 23:56:55.085017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:22.553 [2024-12-05 23:56:55.085027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.090285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.090317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.553 [2024-12-05 23:56:55.090327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.185 ms 00:19:22.553 [2024-12-05 23:56:55.090337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.090454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.090466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.553 [2024-12-05 23:56:55.090474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.070 ms 00:19:22.553 [2024-12-05 23:56:55.090486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.090519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.090529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.553 [2024-12-05 23:56:55.090536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:22.553 [2024-12-05 23:56:55.090546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.090575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:22.553 [2024-12-05 23:56:55.094170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.094201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.553 [2024-12-05 23:56:55.094213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:19:22.553 [2024-12-05 23:56:55.094221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.094281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.094303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.553 [2024-12-05 23:56:55.094312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:22.553 [2024-12-05 23:56:55.094319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.094345] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:22.553 [2024-12-05 23:56:55.094481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.553 [2024-12-05 23:56:55.094496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.553 [2024-12-05 23:56:55.094506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.553 [2024-12-05 23:56:55.094518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.553 [2024-12-05 23:56:55.094527] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.553 [2024-12-05 23:56:55.094536] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:22.553 [2024-12-05 23:56:55.094542] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.553 [2024-12-05 23:56:55.094552] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.553 [2024-12-05 23:56:55.094561] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.553 [2024-12-05 23:56:55.094570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 [2024-12-05 23:56:55.094577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.553 [2024-12-05 23:56:55.094586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:19:22.553 [2024-12-05 23:56:55.094593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.094701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.553 
[2024-12-05 23:56:55.094710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.553 [2024-12-05 23:56:55.094720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:22.553 [2024-12-05 23:56:55.094726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.553 [2024-12-05 23:56:55.094842] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.553 [2024-12-05 23:56:55.094851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.553 [2024-12-05 23:56:55.094860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.553 [2024-12-05 23:56:55.094868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.094877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.553 [2024-12-05 23:56:55.094883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.094891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:22.553 [2024-12-05 23:56:55.094898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.553 [2024-12-05 23:56:55.094906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:22.553 [2024-12-05 23:56:55.094912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.553 [2024-12-05 23:56:55.094920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.553 [2024-12-05 23:56:55.094927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:22.553 [2024-12-05 23:56:55.094936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.553 [2024-12-05 23:56:55.094943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.553 [2024-12-05 23:56:55.094951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:22.553 [2024-12-05 23:56:55.094957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.094985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.553 [2024-12-05 23:56:55.094993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.553 [2024-12-05 23:56:55.095015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.553 [2024-12-05 23:56:55.095037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.553 [2024-12-05 23:56:55.095060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:22.553 [2024-12-05 23:56:55.095082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.553 [2024-12-05 23:56:55.095106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.553 [2024-12-05 23:56:55.095121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.553 [2024-12-05 23:56:55.095127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:22.553 [2024-12-05 23:56:55.095135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.553 [2024-12-05 23:56:55.095142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.553 [2024-12-05 23:56:55.095151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:22.553 [2024-12-05 23:56:55.095158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.553 [2024-12-05 23:56:55.095172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:22.553 [2024-12-05 23:56:55.095180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095186] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.553 [2024-12-05 23:56:55.095195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.553 [2024-12-05 23:56:55.095202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.553 [2024-12-05 23:56:55.095218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.553 [2024-12-05 23:56:55.095228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.553 [2024-12-05 23:56:55.095235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.553 [2024-12-05 23:56:55.095243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.553 [2024-12-05 23:56:55.095249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.553 [2024-12-05 23:56:55.095257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.553 [2024-12-05 23:56:55.095265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.553 [2024-12-05 23:56:55.095276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:22.553 [2024-12-05 23:56:55.095295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:22.553 [2024-12-05 23:56:55.095303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:22.553 [2024-12-05 23:56:55.095312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:22.553 [2024-12-05 23:56:55.095319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:22.553 [2024-12-05 23:56:55.095328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:22.553 [2024-12-05 23:56:55.095335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:22.553 [2024-12-05 23:56:55.095343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:22.553 [2024-12-05 23:56:55.095350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:22.553 [2024-12-05 23:56:55.095361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:22.553 [2024-12-05 23:56:55.095399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.553 [2024-12-05 23:56:55.095411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.553 [2024-12-05 23:56:55.095428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.553 [2024-12-05 23:56:55.095435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.553 [2024-12-05 23:56:55.095444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.553 [2024-12-05 23:56:55.095452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.554 [2024-12-05 23:56:55.095460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.554 [2024-12-05 23:56:55.095467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:19:22.554 [2024-12-05 23:56:55.095475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.554 [2024-12-05 23:56:55.095542] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:22.554 [2024-12-05 23:56:55.095554] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:26.752 [2024-12-05 23:56:58.950811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:58.951081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:26.752 [2024-12-05 23:56:58.951104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3855.254 ms 00:19:26.752 [2024-12-05 23:56:58.951115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:58.977589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:58.977637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.752 [2024-12-05 23:56:58.977650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.233 ms 00:19:26.752 [2024-12-05 23:56:58.977660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:58.977802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:58.977815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.752 [2024-12-05 23:56:58.977839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:26.752 [2024-12-05 23:56:58.977850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.021421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.021469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.752 [2024-12-05 23:56:59.021482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.538 ms 00:19:26.752 [2024-12-05 23:56:59.021493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.021575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.021588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.752 [2024-12-05 23:56:59.021597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:26.752 [2024-12-05 23:56:59.021607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.022006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.022030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.752 [2024-12-05 23:56:59.022039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:26.752 [2024-12-05 23:56:59.022049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.022164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.022174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.752 [2024-12-05 23:56:59.022196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:26.752 [2024-12-05 23:56:59.022207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.037480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.037516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:26.752 [2024-12-05 23:56:59.037526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.247 ms 00:19:26.752 [2024-12-05 23:56:59.037536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.049323] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:26.752 [2024-12-05 23:56:59.065366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.065403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.752 [2024-12-05 23:56:59.065417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.740 ms 00:19:26.752 [2024-12-05 23:56:59.065426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.167457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.167641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:26.752 [2024-12-05 23:56:59.167667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.957 ms 00:19:26.752 [2024-12-05 23:56:59.167677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.167905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.167918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.752 [2024-12-05 23:56:59.167932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:26.752 [2024-12-05 23:56:59.167940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.193605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.193649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:26.752 [2024-12-05 23:56:59.193664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.608 ms 00:19:26.752 [2024-12-05 23:56:59.193675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.218381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.218423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:26.752 [2024-12-05 23:56:59.218438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.616 ms 00:19:26.752 [2024-12-05 23:56:59.218446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.219087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.219109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.752 [2024-12-05 23:56:59.219122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:19:26.752 [2024-12-05 23:56:59.219129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.307973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.308032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:26.752 [2024-12-05 23:56:59.308052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.776 ms 00:19:26.752 [2024-12-05 23:56:59.308061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
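For reference, the ftl0 device whose startup is being traced here was assembled with the rpc.py calls already shown earlier in this test. A condensed, hand-runnable sketch of that sequence follows, assuming a running spdk_tgt with the same PCIe addresses and paths relative to the SPDK repo root; <lvs-uuid> and <lvol-uuid> stand in for the UUIDs printed by the earlier calls, and the sizes are in MiB as reported by the layout dump above:

    # data controller (FTL base device) and cache controller (write buffer)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # thin-provisioned 103424 MiB lvol on the data controller
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
    # 5171 MiB split of the cache controller, used as the NV cache
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of the lvol; the 60 MiB limit matches the
    # "l2p maximum resident size is: 59 (of 60) MiB" notice above
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The bdev_ftl_unload -b ftl0 call traced further down tears the device back down at the end of the test.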
00:19:26.752 [2024-12-05 23:56:59.336819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.336870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:26.752 [2024-12-05 23:56:59.336886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.626 ms 00:19:26.752 [2024-12-05 23:56:59.336898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.363803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.363857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:26.752 [2024-12-05 23:56:59.363874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.774 ms 00:19:26.752 [2024-12-05 23:56:59.363881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.391712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.391938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.752 [2024-12-05 23:56:59.391987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.703 ms 00:19:26.752 [2024-12-05 23:56:59.391997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.392203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.392228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.752 [2024-12-05 23:56:59.392245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:26.752 [2024-12-05 23:56:59.392267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.392366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.752 [2024-12-05 23:56:59.392375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.752 [2024-12-05 23:56:59.392386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:26.752 [2024-12-05 23:56:59.392395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.752 [2024-12-05 23:56:59.393494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.752 [2024-12-05 23:56:59.397092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4314.470 ms, result 0 00:19:26.752 [2024-12-05 23:56:59.398487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.752 { "name": "ftl0", 00:19:26.752 "uuid": "cc09c2db-6e6c-420a-8d4b-8435768d4837" 00:19:26.753 } 00:19:26.753 23:56:59 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:26.753 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:26.753 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.753 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:26.753 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.753 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.753 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:27.010 23:56:59 ftl.ftl_trim --
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:27.268 [ 00:19:27.268 { 00:19:27.268 "name": "ftl0", 00:19:27.268 "aliases": [ 00:19:27.268 "cc09c2db-6e6c-420a-8d4b-8435768d4837" 00:19:27.268 ], 00:19:27.268 "product_name": "FTL disk", 00:19:27.268 "block_size": 4096, 00:19:27.268 "num_blocks": 23592960, 00:19:27.268 "uuid": "cc09c2db-6e6c-420a-8d4b-8435768d4837", 00:19:27.269 "assigned_rate_limits": { 00:19:27.269 "rw_ios_per_sec": 0, 00:19:27.269 "rw_mbytes_per_sec": 0, 00:19:27.269 "r_mbytes_per_sec": 0, 00:19:27.269 "w_mbytes_per_sec": 0 00:19:27.269 }, 00:19:27.269 "claimed": false, 00:19:27.269 "zoned": false, 00:19:27.269 "supported_io_types": { 00:19:27.269 "read": true, 00:19:27.269 "write": true, 00:19:27.269 "unmap": true, 00:19:27.269 "flush": true, 00:19:27.269 "reset": false, 00:19:27.269 "nvme_admin": false, 00:19:27.269 "nvme_io": false, 00:19:27.269 "nvme_io_md": false, 00:19:27.269 "write_zeroes": true, 00:19:27.269 "zcopy": false, 00:19:27.269 "get_zone_info": false, 00:19:27.269 "zone_management": false, 00:19:27.269 "zone_append": false, 00:19:27.269 "compare": false, 00:19:27.269 "compare_and_write": false, 00:19:27.269 "abort": false, 00:19:27.269 "seek_hole": false, 00:19:27.269 "seek_data": false, 00:19:27.269 "copy": false, 00:19:27.269 "nvme_iov_md": false 00:19:27.269 }, 00:19:27.269 "driver_specific": { 00:19:27.269 "ftl": { 00:19:27.269 "base_bdev": "6dcd2287-cb09-478f-be3e-c881a2d38343", 00:19:27.269 "cache": "nvc0n1p0" 00:19:27.269 } 00:19:27.269 } 00:19:27.269 } 00:19:27.269 ] 00:19:27.269 23:56:59 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:27.269 23:56:59 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:27.269 23:56:59 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:27.527 23:57:00 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:27.527 23:57:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:27.527 23:57:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:27.527 { 00:19:27.527 "name": "ftl0", 00:19:27.527 "aliases": [ 00:19:27.527 "cc09c2db-6e6c-420a-8d4b-8435768d4837" 00:19:27.527 ], 00:19:27.527 "product_name": "FTL disk", 00:19:27.527 "block_size": 4096, 00:19:27.527 "num_blocks": 23592960, 00:19:27.527 "uuid": "cc09c2db-6e6c-420a-8d4b-8435768d4837", 00:19:27.527 "assigned_rate_limits": { 00:19:27.527 "rw_ios_per_sec": 0, 00:19:27.527 "rw_mbytes_per_sec": 0, 00:19:27.527 "r_mbytes_per_sec": 0, 00:19:27.527 "w_mbytes_per_sec": 0 00:19:27.527 }, 00:19:27.527 "claimed": false, 00:19:27.527 "zoned": false, 00:19:27.527 "supported_io_types": { 00:19:27.527 "read": true, 00:19:27.527 "write": true, 00:19:27.527 "unmap": true, 00:19:27.527 "flush": true, 00:19:27.527 "reset": false, 00:19:27.527 "nvme_admin": false, 00:19:27.527 "nvme_io": false, 00:19:27.527 "nvme_io_md": false, 00:19:27.527 "write_zeroes": true, 00:19:27.527 "zcopy": false, 00:19:27.527 "get_zone_info": false, 00:19:27.527 "zone_management": false, 00:19:27.527 "zone_append": false, 00:19:27.527 "compare": false, 00:19:27.527 "compare_and_write": false, 00:19:27.527 "abort": false, 00:19:27.527 "seek_hole": false, 00:19:27.527 "seek_data": false, 00:19:27.527 "copy": false, 00:19:27.527 "nvme_iov_md": false 00:19:27.527 }, 00:19:27.527 "driver_specific": { 00:19:27.527 "ftl": { 00:19:27.527 "base_bdev": "6dcd2287-cb09-478f-be3e-c881a2d38343", 
00:19:27.527 "cache": "nvc0n1p0" 00:19:27.527 } 00:19:27.527 } 00:19:27.527 } 00:19:27.527 ]' 00:19:27.527 23:57:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:27.785 23:57:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:27.785 23:57:00 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:27.785 [2024-12-05 23:57:00.438516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.785 [2024-12-05 23:57:00.438562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.785 [2024-12-05 23:57:00.438577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.785 [2024-12-05 23:57:00.438587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.785 [2024-12-05 23:57:00.438620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:27.785 [2024-12-05 23:57:00.441253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.785 [2024-12-05 23:57:00.441376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.785 [2024-12-05 23:57:00.441402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:19:27.785 [2024-12-05 23:57:00.441410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.785 [2024-12-05 23:57:00.441844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.785 [2024-12-05 23:57:00.441861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.785 [2024-12-05 23:57:00.441872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:19:27.785 [2024-12-05 23:57:00.441880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.785 [2024-12-05 23:57:00.445539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.785 [2024-12-05 23:57:00.445560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.785 [2024-12-05 23:57:00.445571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:19:27.785 [2024-12-05 23:57:00.445580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.785 [2024-12-05 23:57:00.452822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.785 [2024-12-05 23:57:00.452929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.785 [2024-12-05 23:57:00.452948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.201 ms 00:19:27.785 [2024-12-05 23:57:00.452957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.785 [2024-12-05 23:57:00.477238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.785 [2024-12-05 23:57:00.477272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.785 [2024-12-05 23:57:00.477287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.192 ms 00:19:27.785 [2024-12-05 23:57:00.477295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.045 [2024-12-05 23:57:00.493556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.045 [2024-12-05 23:57:00.493592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.045 [2024-12-05 23:57:00.493608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.199 ms 00:19:28.045 [2024-12-05 23:57:00.493617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.045 [2024-12-05 23:57:00.493814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.045 [2024-12-05 23:57:00.493824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.045 [2024-12-05 23:57:00.493835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:28.045 [2024-12-05 23:57:00.493842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.045 [2024-12-05 23:57:00.517834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.045 [2024-12-05 23:57:00.517867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.045 [2024-12-05 23:57:00.517878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.960 ms 00:19:28.045 [2024-12-05 23:57:00.517886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.045 [2024-12-05 23:57:00.540794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.045 [2024-12-05 23:57:00.540826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.046 [2024-12-05 23:57:00.540840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.851 ms 00:19:28.046 [2024-12-05 23:57:00.540848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.046 [2024-12-05 23:57:00.563769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.046 [2024-12-05 23:57:00.563892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.046 [2024-12-05 23:57:00.563917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.863 ms 00:19:28.046 [2024-12-05 23:57:00.563927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.046 [2024-12-05 23:57:00.586929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.046 [2024-12-05 23:57:00.586961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.046 [2024-12-05 23:57:00.586988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.892 ms 00:19:28.046 [2024-12-05 23:57:00.586996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.046 [2024-12-05 23:57:00.587054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.046 [2024-12-05 23:57:00.587068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587133] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 
[2024-12-05 23:57:00.587358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:28.046 [2024-12-05 23:57:00.587575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.046 [2024-12-05 23:57:00.587758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.047 [2024-12-05 23:57:00.587953] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.047 [2024-12-05 23:57:00.587964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:19:28.047 [2024-12-05 23:57:00.587986] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.047 [2024-12-05 23:57:00.587995] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.047 [2024-12-05 23:57:00.588004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.047 [2024-12-05 23:57:00.588013] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.047 [2024-12-05 23:57:00.588020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.047 [2024-12-05 23:57:00.588029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:28.047 [2024-12-05 23:57:00.588037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.047 [2024-12-05 23:57:00.588045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.047 [2024-12-05 23:57:00.588051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.047 [2024-12-05 23:57:00.588060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.047 [2024-12-05 23:57:00.588067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.047 [2024-12-05 23:57:00.588077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:19:28.047 [2024-12-05 23:57:00.588084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.600898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.047 [2024-12-05 23:57:00.600928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.047 [2024-12-05 23:57:00.600944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:19:28.047 [2024-12-05 23:57:00.600951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.601354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.047 [2024-12-05 23:57:00.601375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.047 [2024-12-05 23:57:00.601386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:28.047 [2024-12-05 23:57:00.601393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.645210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.047 [2024-12-05 23:57:00.645248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.047 [2024-12-05 23:57:00.645261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.047 [2024-12-05 23:57:00.645270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.645376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.047 [2024-12-05 23:57:00.645386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.047 [2024-12-05 23:57:00.645397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.047 [2024-12-05 23:57:00.645406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.645469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.047 [2024-12-05 23:57:00.645482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.047 [2024-12-05 23:57:00.645494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.047 [2024-12-05 23:57:00.645502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.645532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.047 [2024-12-05 23:57:00.645541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.047 [2024-12-05 23:57:00.645551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.047 [2024-12-05 23:57:00.645559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.047 [2024-12-05 23:57:00.727279] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.047 [2024-12-05 23:57:00.727325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.047 [2024-12-05 23:57:00.727339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.047 [2024-12-05 23:57:00.727348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.790838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.790884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.306 [2024-12-05 23:57:00.790898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.790905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.791020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.791030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.306 [2024-12-05 23:57:00.791044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.791052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.791096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.791105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.306 [2024-12-05 23:57:00.791114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.791121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.791225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.791234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.306 [2024-12-05 23:57:00.791244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.791253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.791300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.791309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.306 [2024-12-05 23:57:00.791319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.791326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.791374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.791383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.306 [2024-12-05 23:57:00.791394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.791403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.306 [2024-12-05 23:57:00.791451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.306 [2024-12-05 23:57:00.791460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.306 [2024-12-05 23:57:00.791469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.306 [2024-12-05 23:57:00.791477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:28.306 [2024-12-05 23:57:00.791643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.107 ms, result 0 00:19:28.306 true 00:19:28.306 23:57:00 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76345 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76345 ']' 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76345 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76345 00:19:28.306 killing process with pid 76345 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76345' 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76345 00:19:28.306 23:57:00 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76345 00:19:34.903 23:57:06 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:35.165 65536+0 records in 00:19:35.165 65536+0 records out 00:19:35.165 268435456 bytes (268 MB, 256 MiB) copied, 1.07514 s, 250 MB/s 00:19:35.165 23:57:07 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:35.425 [2024-12-05 23:57:07.944188] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:19:35.425 [2024-12-05 23:57:07.944373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76545 ] 00:19:35.425 [2024-12-05 23:57:08.107513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.683 [2024-12-05 23:57:08.204829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.942 [2024-12-05 23:57:08.463072] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.942 [2024-12-05 23:57:08.463137] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.942 [2024-12-05 23:57:08.621820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.621866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.942 [2024-12-05 23:57:08.621880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.942 [2024-12-05 23:57:08.621890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.624549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.624582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.942 [2024-12-05 23:57:08.624592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:19:35.942 [2024-12-05 23:57:08.624599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.624671] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.942 [2024-12-05 23:57:08.625368] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.942 [2024-12-05 23:57:08.625388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.625396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.942 [2024-12-05 23:57:08.625404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:19:35.942 [2024-12-05 23:57:08.625411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.627076] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.942 [2024-12-05 23:57:08.639784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.639827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.942 [2024-12-05 23:57:08.639840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.711 ms 00:19:35.942 [2024-12-05 23:57:08.639847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.639937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.639948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.942 [2024-12-05 23:57:08.639957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:35.942 [2024-12-05 23:57:08.639983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.644854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.942 [2024-12-05 23:57:08.644885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.942 [2024-12-05 23:57:08.644894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.828 ms 00:19:35.942 [2024-12-05 23:57:08.644901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.645004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.645015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.942 [2024-12-05 23:57:08.645023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:35.942 [2024-12-05 23:57:08.645030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.645057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.942 [2024-12-05 23:57:08.645065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.942 [2024-12-05 23:57:08.645073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:35.942 [2024-12-05 23:57:08.645080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.942 [2024-12-05 23:57:08.645101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.942 [2024-12-05 23:57:08.648275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.202 [2024-12-05 23:57:08.648401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.202 [2024-12-05 23:57:08.648416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:19:36.202 [2024-12-05 23:57:08.648423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.202 [2024-12-05 23:57:08.648463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.202 [2024-12-05 23:57:08.648472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.202 [2024-12-05 23:57:08.648480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:36.202 [2024-12-05 23:57:08.648487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.202 [2024-12-05 23:57:08.648506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:36.202 [2024-12-05 23:57:08.648525] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:36.202 [2024-12-05 23:57:08.648558] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:36.202 [2024-12-05 23:57:08.648573] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:36.202 [2024-12-05 23:57:08.648675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:36.202 [2024-12-05 23:57:08.648685] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.202 [2024-12-05 23:57:08.648695] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:36.202 [2024-12-05 23:57:08.648707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.202 [2024-12-05 23:57:08.648716] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.202 [2024-12-05 23:57:08.648724] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:36.202 [2024-12-05 23:57:08.648731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.202 [2024-12-05 23:57:08.648738] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:36.202 [2024-12-05 23:57:08.648745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:36.202 [2024-12-05 23:57:08.648753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.202 [2024-12-05 23:57:08.648760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.202 [2024-12-05 23:57:08.648767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:36.202 [2024-12-05 23:57:08.648774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.202 [2024-12-05 23:57:08.648860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.202 [2024-12-05 23:57:08.648871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.202 [2024-12-05 23:57:08.648878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:36.202 [2024-12-05 23:57:08.648884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.202 [2024-12-05 23:57:08.649002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.202 [2024-12-05 23:57:08.649014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.202 [2024-12-05 23:57:08.649021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.202 [2024-12-05 23:57:08.649028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.202 [2024-12-05 23:57:08.649036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.202 [2024-12-05 23:57:08.649042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.202 [2024-12-05 23:57:08.649049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:36.202 [2024-12-05 23:57:08.649056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.202 [2024-12-05 23:57:08.649063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:36.202 [2024-12-05 23:57:08.649069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.202 [2024-12-05 23:57:08.649078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.202 [2024-12-05 23:57:08.649091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:36.202 [2024-12-05 23:57:08.649097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.202 [2024-12-05 23:57:08.649104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.202 [2024-12-05 23:57:08.649111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:36.202 [2024-12-05 23:57:08.649117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.202 [2024-12-05 23:57:08.649123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.202 [2024-12-05 23:57:08.649130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:36.202 [2024-12-05 23:57:08.649136] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.202 [2024-12-05 23:57:08.649143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.203 [2024-12-05 23:57:08.649150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.203 [2024-12-05 23:57:08.649162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.203 [2024-12-05 23:57:08.649169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.203 [2024-12-05 23:57:08.649181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.203 [2024-12-05 23:57:08.649188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.203 [2024-12-05 23:57:08.649201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.203 [2024-12-05 23:57:08.649207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.203 [2024-12-05 23:57:08.649220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.203 [2024-12-05 23:57:08.649226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.203 [2024-12-05 23:57:08.649239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.203 [2024-12-05 23:57:08.649245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:36.203 [2024-12-05 23:57:08.649251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.203 [2024-12-05 23:57:08.649257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:36.203 [2024-12-05 23:57:08.649264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:36.203 [2024-12-05 23:57:08.649270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:36.203 [2024-12-05 23:57:08.649283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:36.203 [2024-12-05 23:57:08.649290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649298] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.203 [2024-12-05 23:57:08.649305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.203 [2024-12-05 23:57:08.649315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.203 [2024-12-05 23:57:08.649322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.203 [2024-12-05 23:57:08.649329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.203 [2024-12-05 23:57:08.649336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.203 [2024-12-05 23:57:08.649342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.203 
[2024-12-05 23:57:08.649349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.203 [2024-12-05 23:57:08.649355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.203 [2024-12-05 23:57:08.649361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.203 [2024-12-05 23:57:08.649369] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.203 [2024-12-05 23:57:08.649377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:36.203 [2024-12-05 23:57:08.649392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:36.203 [2024-12-05 23:57:08.649399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:36.203 [2024-12-05 23:57:08.649406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:36.203 [2024-12-05 23:57:08.649413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:36.203 [2024-12-05 23:57:08.649420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:36.203 [2024-12-05 23:57:08.649426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:36.203 [2024-12-05 23:57:08.649433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:36.203 [2024-12-05 23:57:08.649440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:36.203 [2024-12-05 23:57:08.649447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:36.203 [2024-12-05 23:57:08.649481] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.203 [2024-12-05 23:57:08.649490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.203 [2024-12-05 23:57:08.649505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.203 [2024-12-05 23:57:08.649512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.203 [2024-12-05 23:57:08.649519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.203 [2024-12-05 23:57:08.649527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.649536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.203 [2024-12-05 23:57:08.649543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:19:36.203 [2024-12-05 23:57:08.649551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.675145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.675274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.203 [2024-12-05 23:57:08.675290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.517 ms 00:19:36.203 [2024-12-05 23:57:08.675297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.675420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.675430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.203 [2024-12-05 23:57:08.675438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:36.203 [2024-12-05 23:57:08.675446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.721563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.721601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.203 [2024-12-05 23:57:08.721616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.097 ms 00:19:36.203 [2024-12-05 23:57:08.721624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.721710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.721722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.203 [2024-12-05 23:57:08.721730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:36.203 [2024-12-05 23:57:08.721738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.722078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.722094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.203 [2024-12-05 23:57:08.722108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:19:36.203 [2024-12-05 23:57:08.722116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.722240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.722256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.203 [2024-12-05 23:57:08.722264] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:36.203 [2024-12-05 23:57:08.722271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.735628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.735762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.203 [2024-12-05 23:57:08.735778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.338 ms 00:19:36.203 [2024-12-05 23:57:08.735785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.748612] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:36.203 [2024-12-05 23:57:08.748647] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:36.203 [2024-12-05 23:57:08.748660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.748668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:36.203 [2024-12-05 23:57:08.748676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.780 ms 00:19:36.203 [2024-12-05 23:57:08.748683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.772840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.772872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:36.203 [2024-12-05 23:57:08.772883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.087 ms 00:19:36.203 [2024-12-05 23:57:08.772891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.203 [2024-12-05 23:57:08.784449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.203 [2024-12-05 23:57:08.784479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:36.203 [2024-12-05 23:57:08.784489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.492 ms 00:19:36.203 [2024-12-05 23:57:08.784495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.795822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.795852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:36.204 [2024-12-05 23:57:08.795862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.268 ms 00:19:36.204 [2024-12-05 23:57:08.795870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.796507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.796531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.204 [2024-12-05 23:57:08.796540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:19:36.204 [2024-12-05 23:57:08.796548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.853825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.853870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:36.204 [2024-12-05 23:57:08.853884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.255 ms 00:19:36.204 [2024-12-05 23:57:08.853892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.864472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:36.204 [2024-12-05 23:57:08.878621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.878657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.204 [2024-12-05 23:57:08.878669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.632 ms 00:19:36.204 [2024-12-05 23:57:08.878677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.878758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.878768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:36.204 [2024-12-05 23:57:08.878777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:36.204 [2024-12-05 23:57:08.878784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.878829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.878837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.204 [2024-12-05 23:57:08.878846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:36.204 [2024-12-05 23:57:08.878853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.878884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.878895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:36.204 [2024-12-05 23:57:08.878902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:36.204 [2024-12-05 23:57:08.878909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.878938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:36.204 [2024-12-05 23:57:08.878948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.878956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:36.204 [2024-12-05 23:57:08.878963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:36.204 [2024-12-05 23:57:08.878997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.903030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.903064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:36.204 [2024-12-05 23:57:08.903075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.010 ms 00:19:36.204 [2024-12-05 23:57:08.903083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.204 [2024-12-05 23:57:08.903165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.204 [2024-12-05 23:57:08.903175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:36.204 [2024-12-05 23:57:08.903184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:36.204 [2024-12-05 23:57:08.903191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:36.204 [2024-12-05 23:57:08.903920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.204 [2024-12-05 23:57:08.907102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.835 ms, result 0 00:19:36.204 [2024-12-05 23:57:08.908208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.462 [2024-12-05 23:57:08.920931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.419  [2024-12-05T23:57:11.067Z] Copying: 20/256 [MB] (20 MBps) [2024-12-05T23:57:12.006Z] Copying: 36/256 [MB] (15 MBps) [2024-12-05T23:57:12.947Z] Copying: 54/256 [MB] (17 MBps) [2024-12-05T23:57:14.336Z] Copying: 68/256 [MB] (14 MBps) [2024-12-05T23:57:15.280Z] Copying: 85/256 [MB] (16 MBps) [2024-12-05T23:57:16.218Z] Copying: 116/256 [MB] (31 MBps) [2024-12-05T23:57:17.162Z] Copying: 149/256 [MB] (32 MBps) [2024-12-05T23:57:18.104Z] Copying: 163/256 [MB] (13 MBps) [2024-12-05T23:57:19.047Z] Copying: 175/256 [MB] (12 MBps) [2024-12-05T23:57:19.992Z] Copying: 190/256 [MB] (14 MBps) [2024-12-05T23:57:20.934Z] Copying: 206/256 [MB] (16 MBps) [2024-12-05T23:57:22.318Z] Copying: 223/256 [MB] (16 MBps) [2024-12-05T23:57:23.265Z] Copying: 237/256 [MB] (14 MBps) [2024-12-05T23:57:23.265Z] Copying: 254/256 [MB] (16 MBps) [2024-12-05T23:57:23.265Z] Copying: 256/256 [MB] (average 18 MBps)[2024-12-05 23:57:23.011749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.556 [2024-12-05 23:57:23.022095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.022293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.556 [2024-12-05 23:57:23.022318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.556 [2024-12-05 23:57:23.022337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.022369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:50.556 [2024-12-05 23:57:23.025340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.025501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.556 [2024-12-05 23:57:23.025521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:19:50.556 [2024-12-05 23:57:23.025529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.028674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.028827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.556 [2024-12-05 23:57:23.028845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:19:50.556 [2024-12-05 23:57:23.028854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.037416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.037467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.556 [2024-12-05 23:57:23.037478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:19:50.556 [2024-12-05 23:57:23.037486] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.044464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.044502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.556 [2024-12-05 23:57:23.044513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:19:50.556 [2024-12-05 23:57:23.044521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.070774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.070822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.556 [2024-12-05 23:57:23.070835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.190 ms 00:19:50.556 [2024-12-05 23:57:23.070844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.087414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.087481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.556 [2024-12-05 23:57:23.087497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.507 ms 00:19:50.556 [2024-12-05 23:57:23.087505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.087660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.087672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.556 [2024-12-05 23:57:23.087682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:50.556 [2024-12-05 23:57:23.087698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.113289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.113334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.556 [2024-12-05 23:57:23.113347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.573 ms 00:19:50.556 [2024-12-05 23:57:23.113355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.138728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.138907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.556 [2024-12-05 23:57:23.138927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.313 ms 00:19:50.556 [2024-12-05 23:57:23.138935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.163409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.163453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.556 [2024-12-05 23:57:23.163466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.406 ms 00:19:50.556 [2024-12-05 23:57:23.163473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.187869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.556 [2024-12-05 23:57:23.187924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.556 [2024-12-05 23:57:23.187935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.301 ms 00:19:50.556 [2024-12-05 23:57:23.187942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.556 [2024-12-05 23:57:23.188005] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.556 [2024-12-05 23:57:23.188024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.556 [2024-12-05 23:57:23.188035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.556 [2024-12-05 23:57:23.188044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.556 [2024-12-05 23:57:23.188052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.556 [2024-12-05 23:57:23.188060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.556 [2024-12-05 23:57:23.188069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.556 [2024-12-05 23:57:23.188076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 
[2024-12-05 23:57:23.188205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.557 [2024-12-05 23:57:23.188416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.557 [2024-12-05 23:57:23.188769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.558 [2024-12-05 23:57:23.188777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.558 [2024-12-05 23:57:23.188785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.558 [2024-12-05 23:57:23.188793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.558 [2024-12-05 23:57:23.188801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.558 [2024-12-05 23:57:23.188808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.558 [2024-12-05 23:57:23.188825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.558 [2024-12-05 23:57:23.188834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:19:50.558 [2024-12-05 23:57:23.188843] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.558 [2024-12-05 23:57:23.188851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.558 [2024-12-05 23:57:23.188859] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.558 [2024-12-05 23:57:23.188867] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.558 [2024-12-05 23:57:23.188874] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.558 [2024-12-05 23:57:23.188882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.558 [2024-12-05 23:57:23.188889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.558 [2024-12-05 23:57:23.188896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.558 [2024-12-05 23:57:23.188903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.558 [2024-12-05 23:57:23.188912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.558 [2024-12-05 23:57:23.188922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.558 [2024-12-05 23:57:23.188931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:19:50.558 [2024-12-05 23:57:23.188938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.558 [2024-12-05 23:57:23.202435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.558 [2024-12-05 23:57:23.202476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.558 [2024-12-05 23:57:23.202487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.452 ms 00:19:50.558 [2024-12-05 23:57:23.202495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.558 [2024-12-05 23:57:23.202896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.558 [2024-12-05 23:57:23.202907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.558 [2024-12-05 23:57:23.202916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:50.558 [2024-12-05 23:57:23.202924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.558 [2024-12-05 23:57:23.241797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.558 [2024-12-05 23:57:23.242002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.558 [2024-12-05 23:57:23.242022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.558 [2024-12-05 23:57:23.242031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.558 [2024-12-05 23:57:23.242128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.558 [2024-12-05 
23:57:23.242137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.558 [2024-12-05 23:57:23.242146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.558 [2024-12-05 23:57:23.242155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.558 [2024-12-05 23:57:23.242205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.558 [2024-12-05 23:57:23.242215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.558 [2024-12-05 23:57:23.242223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.558 [2024-12-05 23:57:23.242231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.558 [2024-12-05 23:57:23.242249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.558 [2024-12-05 23:57:23.242260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.558 [2024-12-05 23:57:23.242268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.558 [2024-12-05 23:57:23.242275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.819 [2024-12-05 23:57:23.327586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.819 [2024-12-05 23:57:23.327652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.820 [2024-12-05 23:57:23.327667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.327676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.396660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.396713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.820 [2024-12-05 23:57:23.396725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.396734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.396799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.396810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.820 [2024-12-05 23:57:23.396819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.396828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.396863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.396872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.820 [2024-12-05 23:57:23.396887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.396895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.397013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.397025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.820 [2024-12-05 23:57:23.397036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.397043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.397074] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.397085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.820 [2024-12-05 23:57:23.397093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.397104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.397148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.397158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.820 [2024-12-05 23:57:23.397167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.397174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.397223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.820 [2024-12-05 23:57:23.397234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.820 [2024-12-05 23:57:23.397248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.820 [2024-12-05 23:57:23.397256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.820 [2024-12-05 23:57:23.397415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.308 ms, result 0 00:19:51.776 00:19:51.776 00:19:51.776 23:57:24 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76718 00:19:51.776 23:57:24 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76718 00:19:51.776 23:57:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76718 ']' 00:19:51.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.776 23:57:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.776 23:57:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.776 23:57:24 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:51.776 23:57:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.776 23:57:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.776 23:57:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:51.776 [2024-12-05 23:57:24.370806] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:19:51.776 [2024-12-05 23:57:24.370980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76718 ] 00:19:52.036 [2024-12-05 23:57:24.527030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.036 [2024-12-05 23:57:24.663545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.978 23:57:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.978 23:57:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:52.978 23:57:25 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:52.978 [2024-12-05 23:57:25.581832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.978 [2024-12-05 23:57:25.581916] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:53.240 [2024-12-05 23:57:25.758991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.759056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:53.240 [2024-12-05 23:57:25.759074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:53.240 [2024-12-05 23:57:25.759083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.762120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.762163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.240 [2024-12-05 23:57:25.762176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:19:53.240 [2024-12-05 23:57:25.762183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.762286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:53.240 [2024-12-05 23:57:25.763025] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:53.240 [2024-12-05 23:57:25.763052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.763061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.240 [2024-12-05 23:57:25.763072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:19:53.240 [2024-12-05 23:57:25.763079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.764656] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:53.240 [2024-12-05 23:57:25.778511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.778562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:53.240 [2024-12-05 23:57:25.778575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.860 ms 00:19:53.240 [2024-12-05 23:57:25.778585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.778696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.778710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:53.240 [2024-12-05 23:57:25.778720] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:53.240 [2024-12-05 23:57:25.778729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.786515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.786561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.240 [2024-12-05 23:57:25.786571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:19:53.240 [2024-12-05 23:57:25.786581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.786691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.786703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.240 [2024-12-05 23:57:25.786715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:53.240 [2024-12-05 23:57:25.786726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.786754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.786766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:53.240 [2024-12-05 23:57:25.786773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:53.240 [2024-12-05 23:57:25.786783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.786808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:53.240 [2024-12-05 23:57:25.790667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.790708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.240 [2024-12-05 23:57:25.790721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.864 ms 00:19:53.240 [2024-12-05 23:57:25.790728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.790803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.790813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:53.240 [2024-12-05 23:57:25.790827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:53.240 [2024-12-05 23:57:25.790835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.790858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:53.240 [2024-12-05 23:57:25.790880] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:53.240 [2024-12-05 23:57:25.790924] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:53.240 [2024-12-05 23:57:25.790941] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:53.240 [2024-12-05 23:57:25.791062] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:53.240 [2024-12-05 23:57:25.791076] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:53.240 [2024-12-05 23:57:25.791090] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:53.240 [2024-12-05 23:57:25.791099] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:53.240 [2024-12-05 23:57:25.791110] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:53.240 [2024-12-05 23:57:25.791118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:53.240 [2024-12-05 23:57:25.791127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:53.240 [2024-12-05 23:57:25.791134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:53.240 [2024-12-05 23:57:25.791145] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:53.240 [2024-12-05 23:57:25.791152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.791162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:53.240 [2024-12-05 23:57:25.791170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:19:53.240 [2024-12-05 23:57:25.791181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.791269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.240 [2024-12-05 23:57:25.791279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:53.240 [2024-12-05 23:57:25.791286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:53.240 [2024-12-05 23:57:25.791295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.240 [2024-12-05 23:57:25.791395] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:53.240 [2024-12-05 23:57:25.791407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:53.240 [2024-12-05 23:57:25.791415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.240 [2024-12-05 23:57:25.791424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.240 [2024-12-05 23:57:25.791435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:53.240 [2024-12-05 23:57:25.791444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:53.240 [2024-12-05 23:57:25.791450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:53.240 [2024-12-05 23:57:25.791460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:53.240 [2024-12-05 23:57:25.791468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:53.240 [2024-12-05 23:57:25.791476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.240 [2024-12-05 23:57:25.791483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:53.240 [2024-12-05 23:57:25.791491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:53.240 [2024-12-05 23:57:25.791499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.240 [2024-12-05 23:57:25.791508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:53.241 [2024-12-05 23:57:25.791515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:53.241 [2024-12-05 23:57:25.791525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.241 
[2024-12-05 23:57:25.791532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:53.241 [2024-12-05 23:57:25.791541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:53.241 [2024-12-05 23:57:25.791570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:53.241 [2024-12-05 23:57:25.791596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:53.241 [2024-12-05 23:57:25.791617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:53.241 [2024-12-05 23:57:25.791641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:53.241 [2024-12-05 23:57:25.791662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.241 [2024-12-05 23:57:25.791676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:53.241 [2024-12-05 23:57:25.791684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:53.241 [2024-12-05 23:57:25.791691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.241 [2024-12-05 23:57:25.791699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:53.241 [2024-12-05 23:57:25.791706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:53.241 [2024-12-05 23:57:25.791716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:53.241 [2024-12-05 23:57:25.791730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:53.241 [2024-12-05 23:57:25.791736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791744] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:53.241 [2024-12-05 23:57:25.791753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:53.241 [2024-12-05 23:57:25.791763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.241 [2024-12-05 23:57:25.791779] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:53.241 [2024-12-05 23:57:25.791786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:53.241 [2024-12-05 23:57:25.791795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:53.241 [2024-12-05 23:57:25.791802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:53.241 [2024-12-05 23:57:25.791810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:53.241 [2024-12-05 23:57:25.791816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:53.241 [2024-12-05 23:57:25.791827] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:53.241 [2024-12-05 23:57:25.791836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.791849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:53.241 [2024-12-05 23:57:25.791858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:53.241 [2024-12-05 23:57:25.791866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:53.241 [2024-12-05 23:57:25.791874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:53.241 [2024-12-05 23:57:25.791882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:53.241 [2024-12-05 23:57:25.791897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:53.241 [2024-12-05 23:57:25.791906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:53.241 [2024-12-05 23:57:25.791912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:53.241 [2024-12-05 23:57:25.791922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:53.241 [2024-12-05 23:57:25.791929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.791938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.791945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.791953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.791960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:53.241 [2024-12-05 23:57:25.791980] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:53.241 [2024-12-05 
23:57:25.791989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.792001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:53.241 [2024-12-05 23:57:25.792008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:53.241 [2024-12-05 23:57:25.792017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:53.241 [2024-12-05 23:57:25.792024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:53.241 [2024-12-05 23:57:25.792033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.792041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:53.241 [2024-12-05 23:57:25.792053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:19:53.241 [2024-12-05 23:57:25.792060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.823902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.823955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.241 [2024-12-05 23:57:25.823985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.779 ms 00:19:53.241 [2024-12-05 23:57:25.823995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.824127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.824138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:53.241 [2024-12-05 23:57:25.824149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:53.241 [2024-12-05 23:57:25.824157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.860047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.860097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.241 [2024-12-05 23:57:25.860110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.863 ms 00:19:53.241 [2024-12-05 23:57:25.860119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.860213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.860224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.241 [2024-12-05 23:57:25.860236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.241 [2024-12-05 23:57:25.860243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.860805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.860826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.241 [2024-12-05 23:57:25.860839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:19:53.241 [2024-12-05 23:57:25.860846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.861021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.861038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.241 [2024-12-05 23:57:25.861049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:53.241 [2024-12-05 23:57:25.861057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.879790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.879836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.241 [2024-12-05 23:57:25.879850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.707 ms 00:19:53.241 [2024-12-05 23:57:25.879858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.913310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:53.241 [2024-12-05 23:57:25.913368] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:53.241 [2024-12-05 23:57:25.913390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.913401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:53.241 [2024-12-05 23:57:25.913413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.391 ms 00:19:53.241 [2024-12-05 23:57:25.913428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.241 [2024-12-05 23:57:25.939765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.241 [2024-12-05 23:57:25.939818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:53.241 [2024-12-05 23:57:25.939834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.236 ms 00:19:53.241 [2024-12-05 23:57:25.939845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:25.952389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:25.952435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:53.502 [2024-12-05 23:57:25.952453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:19:53.502 [2024-12-05 23:57:25.952461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:25.964995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:25.965042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:53.502 [2024-12-05 23:57:25.965056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.447 ms 00:19:53.502 [2024-12-05 23:57:25.965063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:25.965724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:25.965748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:53.502 [2024-12-05 23:57:25.965761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:19:53.502 [2024-12-05 23:57:25.965769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 
23:57:26.033280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:26.033349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:53.502 [2024-12-05 23:57:26.033367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.480 ms 00:19:53.502 [2024-12-05 23:57:26.033377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:26.044729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:53.502 [2024-12-05 23:57:26.065251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:26.065317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:53.502 [2024-12-05 23:57:26.065331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.778 ms 00:19:53.502 [2024-12-05 23:57:26.065341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:26.065457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:26.065471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:53.502 [2024-12-05 23:57:26.065481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:53.502 [2024-12-05 23:57:26.065491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:26.065549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:26.065561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:53.502 [2024-12-05 23:57:26.065573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:53.502 [2024-12-05 23:57:26.065582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:26.065608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:26.065621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:53.502 [2024-12-05 23:57:26.065630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:53.502 [2024-12-05 23:57:26.065640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:26.065676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:53.502 [2024-12-05 23:57:26.065693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.502 [2024-12-05 23:57:26.065701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:53.502 [2024-12-05 23:57:26.065711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:53.502 [2024-12-05 23:57:26.065721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.502 [2024-12-05 23:57:26.091906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.503 [2024-12-05 23:57:26.091957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:53.503 [2024-12-05 23:57:26.091984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.157 ms 00:19:53.503 [2024-12-05 23:57:26.091994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.503 [2024-12-05 23:57:26.092107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.503 [2024-12-05 23:57:26.092118] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:53.503 [2024-12-05 23:57:26.092132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:53.503 [2024-12-05 23:57:26.092140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.503 [2024-12-05 23:57:26.093793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:53.503 [2024-12-05 23:57:26.097120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.478 ms, result 0 00:19:53.503 [2024-12-05 23:57:26.098711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:53.503 Some configs were skipped because the RPC state that can call them passed over. 00:19:53.503 23:57:26 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:53.762 [2024-12-05 23:57:26.339615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.762 [2024-12-05 23:57:26.339688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:53.762 [2024-12-05 23:57:26.339701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:19:53.762 [2024-12-05 23:57:26.339712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.762 [2024-12-05 23:57:26.339751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.651 ms, result 0 00:19:53.762 true 00:19:53.762 23:57:26 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:54.023 [2024-12-05 23:57:26.543315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.023 [2024-12-05 23:57:26.543378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:54.023 [2024-12-05 23:57:26.543395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:19:54.023 [2024-12-05 23:57:26.543403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.023 [2024-12-05 23:57:26.543446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.069 ms, result 0 00:19:54.023 true 00:19:54.023 23:57:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76718 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76718 ']' 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76718 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76718 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.023 killing process with pid 76718 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76718' 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76718 00:19:54.023 23:57:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76718 00:19:54.965 [2024-12-05 23:57:27.332315] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.332371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:54.965 [2024-12-05 23:57:27.332384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:54.965 [2024-12-05 23:57:27.332395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.332417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:54.965 [2024-12-05 23:57:27.335052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.335080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:54.965 [2024-12-05 23:57:27.335094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:19:54.965 [2024-12-05 23:57:27.335103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.336520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.336548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:54.965 [2024-12-05 23:57:27.336558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:54.965 [2024-12-05 23:57:27.336566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.340608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.340637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:54.965 [2024-12-05 23:57:27.340648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.023 ms 00:19:54.965 [2024-12-05 23:57:27.340655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.347588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.347625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:54.965 [2024-12-05 23:57:27.347639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:19:54.965 [2024-12-05 23:57:27.347647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.357479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.357514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:54.965 [2024-12-05 23:57:27.357527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.784 ms 00:19:54.965 [2024-12-05 23:57:27.357535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.365571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.365600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:54.965 [2024-12-05 23:57:27.365613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.001 ms 00:19:54.965 [2024-12-05 23:57:27.365621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.365761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.365771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:54.965 [2024-12-05 23:57:27.365781] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:54.965 [2024-12-05 23:57:27.365788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.375649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.375789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:54.965 [2024-12-05 23:57:27.375808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.841 ms 00:19:54.965 [2024-12-05 23:57:27.375815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.385288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.385315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:54.965 [2024-12-05 23:57:27.385329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.427 ms 00:19:54.965 [2024-12-05 23:57:27.385336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.394437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.394544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.965 [2024-12-05 23:57:27.394560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.067 ms 00:19:54.965 [2024-12-05 23:57:27.394567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.403647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.965 [2024-12-05 23:57:27.403676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.965 [2024-12-05 23:57:27.403687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.976 ms 00:19:54.965 [2024-12-05 23:57:27.403694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.965 [2024-12-05 23:57:27.403725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.965 [2024-12-05 23:57:27.403738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403825] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.403994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.404003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.404010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.404019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.404026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.965 [2024-12-05 23:57:27.404035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 
[2024-12-05 23:57:27.404051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:54.966 [2024-12-05 23:57:27.404281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.966 [2024-12-05 23:57:27.404622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.966 [2024-12-05 23:57:27.404635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:19:54.966 [2024-12-05 23:57:27.404643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.966 [2024-12-05 23:57:27.404652] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:54.966 [2024-12-05 23:57:27.404659] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.966 [2024-12-05 23:57:27.404668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.966 [2024-12-05 23:57:27.404675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.966 [2024-12-05 23:57:27.404684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.966 [2024-12-05 23:57:27.404691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.966 [2024-12-05 23:57:27.404699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.966 [2024-12-05 23:57:27.404705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.966 [2024-12-05 23:57:27.404713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:54.966 [2024-12-05 23:57:27.404720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.966 [2024-12-05 23:57:27.404730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:19:54.966 [2024-12-05 23:57:27.404739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.966 [2024-12-05 23:57:27.417170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.966 [2024-12-05 23:57:27.417277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.966 [2024-12-05 23:57:27.417296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.411 ms 00:19:54.966 [2024-12-05 23:57:27.417303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.966 [2024-12-05 23:57:27.417659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.966 [2024-12-05 23:57:27.417671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.966 [2024-12-05 23:57:27.417681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:19:54.966 [2024-12-05 23:57:27.417688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.966 [2024-12-05 23:57:27.461460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.966 [2024-12-05 23:57:27.461492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.966 [2024-12-05 23:57:27.461505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.966 [2024-12-05 23:57:27.461514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.966 [2024-12-05 23:57:27.461608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.966 [2024-12-05 23:57:27.461619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.967 [2024-12-05 23:57:27.461629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.461637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.461675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.461685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.967 [2024-12-05 23:57:27.461696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.461703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.461721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.461729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.967 [2024-12-05 23:57:27.461738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.461746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.537611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.537650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.967 [2024-12-05 23:57:27.537663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.537671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 
23:57:27.600528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.600668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.967 [2024-12-05 23:57:27.600689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.600697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.600776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.600786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.967 [2024-12-05 23:57:27.600798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.600805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.600835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.600843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.967 [2024-12-05 23:57:27.600853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.600860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.600954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.600963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.967 [2024-12-05 23:57:27.600988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.600995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.601030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.601038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.967 [2024-12-05 23:57:27.601048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.601056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.601097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.601105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.967 [2024-12-05 23:57:27.601116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.601123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.601166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.967 [2024-12-05 23:57:27.601175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.967 [2024-12-05 23:57:27.601185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.967 [2024-12-05 23:57:27.601192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.967 [2024-12-05 23:57:27.601320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.987 ms, result 0 00:19:55.538 23:57:28 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:55.538 23:57:28 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.538 [2024-12-05 23:57:28.241046] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:19:55.538 [2024-12-05 23:57:28.241165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76772 ] 00:19:55.799 [2024-12-05 23:57:28.399530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.799 [2024-12-05 23:57:28.498308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.372 [2024-12-05 23:57:28.770792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.372 [2024-12-05 23:57:28.770869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.372 [2024-12-05 23:57:28.933101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.933345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.373 [2024-12-05 23:57:28.933371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:56.373 [2024-12-05 23:57:28.933381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.936474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.936664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.373 [2024-12-05 23:57:28.936684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:19:56.373 [2024-12-05 23:57:28.936694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.936957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.373 [2024-12-05 23:57:28.937792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:56.373 [2024-12-05 23:57:28.937828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.937838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.373 [2024-12-05 23:57:28.937848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:19:56.373 [2024-12-05 23:57:28.937856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.939701] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:56.373 [2024-12-05 23:57:28.954605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.954657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:56.373 [2024-12-05 23:57:28.954672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.905 ms 00:19:56.373 [2024-12-05 23:57:28.954682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.954816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.954829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:56.373 [2024-12-05 23:57:28.954839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:19:56.373 [2024-12-05 23:57:28.954848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.964136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.964181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.373 [2024-12-05 23:57:28.964192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.241 ms 00:19:56.373 [2024-12-05 23:57:28.964201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.964325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.964336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.373 [2024-12-05 23:57:28.964346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:56.373 [2024-12-05 23:57:28.964355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.964388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.964397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.373 [2024-12-05 23:57:28.964406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:56.373 [2024-12-05 23:57:28.964414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.964437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:56.373 [2024-12-05 23:57:28.968810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.968853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.373 [2024-12-05 23:57:28.968864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:19:56.373 [2024-12-05 23:57:28.968872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.968955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.968991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.373 [2024-12-05 23:57:28.969002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:56.373 [2024-12-05 23:57:28.969010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.969037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:56.373 [2024-12-05 23:57:28.969062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:56.373 [2024-12-05 23:57:28.969100] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:56.373 [2024-12-05 23:57:28.969117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:56.373 [2024-12-05 23:57:28.969224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.373 [2024-12-05 23:57:28.969237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.373 [2024-12-05 23:57:28.969248] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.373 [2024-12-05 23:57:28.969262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969281] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:56.373 [2024-12-05 23:57:28.969289] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.373 [2024-12-05 23:57:28.969296] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.373 [2024-12-05 23:57:28.969304] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.373 [2024-12-05 23:57:28.969312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.969320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.373 [2024-12-05 23:57:28.969328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:56.373 [2024-12-05 23:57:28.969335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.969424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.373 [2024-12-05 23:57:28.969437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.373 [2024-12-05 23:57:28.969445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:56.373 [2024-12-05 23:57:28.969452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.373 [2024-12-05 23:57:28.969559] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:56.373 [2024-12-05 23:57:28.969570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.373 [2024-12-05 23:57:28.969578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.373 [2024-12-05 23:57:28.969602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.373 [2024-12-05 23:57:28.969623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.373 [2024-12-05 23:57:28.969637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.373 [2024-12-05 23:57:28.969652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:56.373 [2024-12-05 23:57:28.969658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.373 [2024-12-05 23:57:28.969665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.373 [2024-12-05 23:57:28.969672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:56.373 [2024-12-05 23:57:28.969679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969688] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.373 [2024-12-05 23:57:28.969696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.373 [2024-12-05 23:57:28.969718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.373 [2024-12-05 23:57:28.969739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.373 [2024-12-05 23:57:28.969759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.373 [2024-12-05 23:57:28.969778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.373 [2024-12-05 23:57:28.969791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.373 [2024-12-05 23:57:28.969798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:56.373 [2024-12-05 23:57:28.969804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.373 [2024-12-05 23:57:28.969811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.373 [2024-12-05 23:57:28.969818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:56.373 [2024-12-05 23:57:28.969824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.373 [2024-12-05 23:57:28.969830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.373 [2024-12-05 23:57:28.969837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:56.373 [2024-12-05 23:57:28.969844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.374 [2024-12-05 23:57:28.969850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.374 [2024-12-05 23:57:28.969857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:56.374 [2024-12-05 23:57:28.969863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.374 [2024-12-05 23:57:28.969869] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.374 [2024-12-05 23:57:28.969877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.374 [2024-12-05 23:57:28.969886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.374 [2024-12-05 23:57:28.969894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.374 [2024-12-05 23:57:28.969902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:56.374 
[2024-12-05 23:57:28.969912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:56.374 [2024-12-05 23:57:28.969919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:56.374 [2024-12-05 23:57:28.969926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:56.374 [2024-12-05 23:57:28.969933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:56.374 [2024-12-05 23:57:28.969940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:56.374 [2024-12-05 23:57:28.969948] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:56.374 [2024-12-05 23:57:28.969960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.969983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:56.374 [2024-12-05 23:57:28.969991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:56.374 [2024-12-05 23:57:28.969998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:56.374 [2024-12-05 23:57:28.970005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:56.374 [2024-12-05 23:57:28.970013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:56.374 [2024-12-05 23:57:28.970020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:56.374 [2024-12-05 23:57:28.970028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:56.374 [2024-12-05 23:57:28.970034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:56.374 [2024-12-05 23:57:28.970042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:56.374 [2024-12-05 23:57:28.970050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.970057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.970064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.970072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.970079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:56.374 [2024-12-05 23:57:28.970086] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:56.374 [2024-12-05 23:57:28.970095] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.970105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:56.374 [2024-12-05 23:57:28.970112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:56.374 [2024-12-05 23:57:28.970120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:56.374 [2024-12-05 23:57:28.970127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:56.374 [2024-12-05 23:57:28.970136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:28.970148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:56.374 [2024-12-05 23:57:28.970155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:19:56.374 [2024-12-05 23:57:28.970162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 23:57:29.003849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.003904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.374 [2024-12-05 23:57:29.003916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.622 ms 00:19:56.374 [2024-12-05 23:57:29.003924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 23:57:29.004100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.004113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.374 [2024-12-05 23:57:29.004123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:56.374 [2024-12-05 23:57:29.004131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 23:57:29.057281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.057330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.374 [2024-12-05 23:57:29.057347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.126 ms 00:19:56.374 [2024-12-05 23:57:29.057356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 23:57:29.057463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.057476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.374 [2024-12-05 23:57:29.057485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.374 [2024-12-05 23:57:29.057494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 23:57:29.058029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.058085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.374 [2024-12-05 23:57:29.058104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:19:56.374 [2024-12-05 23:57:29.058113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 
23:57:29.058261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.058271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.374 [2024-12-05 23:57:29.058280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:56.374 [2024-12-05 23:57:29.058288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.374 [2024-12-05 23:57:29.074486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.374 [2024-12-05 23:57:29.074528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.374 [2024-12-05 23:57:29.074540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.176 ms 00:19:56.374 [2024-12-05 23:57:29.074548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.635 [2024-12-05 23:57:29.088664] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:56.635 [2024-12-05 23:57:29.088711] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:56.635 [2024-12-05 23:57:29.088725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.088734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:56.636 [2024-12-05 23:57:29.088744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.070 ms 00:19:56.636 [2024-12-05 23:57:29.088752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.117360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.117428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:56.636 [2024-12-05 23:57:29.117442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.513 ms 00:19:56.636 [2024-12-05 23:57:29.117450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.130490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.130536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:56.636 [2024-12-05 23:57:29.130548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:19:56.636 [2024-12-05 23:57:29.130556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.143517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.143726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:56.636 [2024-12-05 23:57:29.143747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.869 ms 00:19:56.636 [2024-12-05 23:57:29.143756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.144456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.144484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.636 [2024-12-05 23:57:29.144496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:19:56.636 [2024-12-05 23:57:29.144505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.213370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.213431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:56.636 [2024-12-05 23:57:29.213448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.835 ms 00:19:56.636 [2024-12-05 23:57:29.213458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.225103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:56.636 [2024-12-05 23:57:29.245942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.246012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.636 [2024-12-05 23:57:29.246027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.378 ms 00:19:56.636 [2024-12-05 23:57:29.246043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.246145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.246157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:56.636 [2024-12-05 23:57:29.246168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:56.636 [2024-12-05 23:57:29.246176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.246239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.246250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.636 [2024-12-05 23:57:29.246259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:56.636 [2024-12-05 23:57:29.246271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.246304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.246314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.636 [2024-12-05 23:57:29.246322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:56.636 [2024-12-05 23:57:29.246331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.246370] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:56.636 [2024-12-05 23:57:29.246381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.246390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:56.636 [2024-12-05 23:57:29.246398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:56.636 [2024-12-05 23:57:29.246407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.273105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.273157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.636 [2024-12-05 23:57:29.273172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.675 ms 00:19:56.636 [2024-12-05 23:57:29.273182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.273316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.636 [2024-12-05 23:57:29.273328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:56.636 [2024-12-05 23:57:29.273338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:56.636 [2024-12-05 23:57:29.273347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.636 [2024-12-05 23:57:29.274528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.636 [2024-12-05 23:57:29.278201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.093 ms, result 0 00:19:56.636 [2024-12-05 23:57:29.279474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.636 [2024-12-05 23:57:29.293056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.027  [2024-12-05T23:57:31.310Z] Copying: 18/256 [MB] (18 MBps) [2024-12-05T23:57:32.692Z] Copying: 34/256 [MB] (16 MBps) [2024-12-05T23:57:33.636Z] Copying: 46/256 [MB] (11 MBps) [2024-12-05T23:57:34.582Z] Copying: 58/256 [MB] (12 MBps) [2024-12-05T23:57:35.524Z] Copying: 77/256 [MB] (18 MBps) [2024-12-05T23:57:36.466Z] Copying: 96/256 [MB] (19 MBps) [2024-12-05T23:57:37.402Z] Copying: 110/256 [MB] (13 MBps) [2024-12-05T23:57:38.360Z] Copying: 125/256 [MB] (14 MBps) [2024-12-05T23:57:39.733Z] Copying: 140/256 [MB] (15 MBps) [2024-12-05T23:57:40.300Z] Copying: 152/256 [MB] (12 MBps) [2024-12-05T23:57:41.673Z] Copying: 171/256 [MB] (19 MBps) [2024-12-05T23:57:42.604Z] Copying: 186/256 [MB] (14 MBps) [2024-12-05T23:57:43.537Z] Copying: 210/256 [MB] (23 MBps) [2024-12-05T23:57:44.470Z] Copying: 231/256 [MB] (21 MBps) [2024-12-05T23:57:45.037Z] Copying: 251/256 [MB] (19 MBps) [2024-12-05T23:57:45.037Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-05 23:57:44.739781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.328 [2024-12-05 23:57:44.748949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.328 [2024-12-05 23:57:44.748994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:12.328 [2024-12-05 23:57:44.749012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.328 [2024-12-05 23:57:44.749020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.328 [2024-12-05 23:57:44.749041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:12.328 [2024-12-05 23:57:44.751611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.328 [2024-12-05 23:57:44.751636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:12.328 [2024-12-05 23:57:44.751646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:20:12.328 [2024-12-05 23:57:44.751655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.328 [2024-12-05 23:57:44.751908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.328 [2024-12-05 23:57:44.751918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:12.328 [2024-12-05 23:57:44.751927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:20:12.328 [2024-12-05 23:57:44.751935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.328 [2024-12-05 23:57:44.755631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:12.328 [2024-12-05 23:57:44.755649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:12.328 [2024-12-05 23:57:44.755659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.678 ms 00:20:12.328 [2024-12-05 23:57:44.755667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.328 [2024-12-05 23:57:44.762640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.328 [2024-12-05 23:57:44.762757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:12.329 [2024-12-05 23:57:44.762773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.956 ms 00:20:12.329 [2024-12-05 23:57:44.762782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.786511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.786543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:12.329 [2024-12-05 23:57:44.786553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.671 ms 00:20:12.329 [2024-12-05 23:57:44.786561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.800834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.800865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:12.329 [2024-12-05 23:57:44.800880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.240 ms 00:20:12.329 [2024-12-05 23:57:44.800889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.801037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.801049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:12.329 [2024-12-05 23:57:44.801064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:12.329 [2024-12-05 23:57:44.801071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.824807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.824928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:12.329 [2024-12-05 23:57:44.824943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.721 ms 00:20:12.329 [2024-12-05 23:57:44.824950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.848565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.848673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:12.329 [2024-12-05 23:57:44.848686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.565 ms 00:20:12.329 [2024-12-05 23:57:44.848693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.871589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.871696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:12.329 [2024-12-05 23:57:44.871710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.867 ms 00:20:12.329 [2024-12-05 23:57:44.871717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 
[2024-12-05 23:57:44.894987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.329 [2024-12-05 23:57:44.895016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:12.329 [2024-12-05 23:57:44.895026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.202 ms 00:20:12.329 [2024-12-05 23:57:44.895033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.329 [2024-12-05 23:57:44.895064] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:12.329 [2024-12-05 23:57:44.895078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:20:12.329 [2024-12-05 23:57:44.895232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:12.329 [2024-12-05 23:57:44.895537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895776] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:12.330 [2024-12-05 23:57:44.895821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:12.330 [2024-12-05 23:57:44.895829] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:20:12.330 [2024-12-05 23:57:44.895837] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:12.330 [2024-12-05 23:57:44.895844] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:12.330 [2024-12-05 23:57:44.895851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:12.330 [2024-12-05 23:57:44.895858] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:12.330 [2024-12-05 23:57:44.895864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:12.330 [2024-12-05 23:57:44.895872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:12.330 [2024-12-05 23:57:44.895881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:12.330 [2024-12-05 23:57:44.895887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:12.330 [2024-12-05 23:57:44.895894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:12.330 [2024-12-05 23:57:44.895901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.330 [2024-12-05 23:57:44.895908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:12.330 [2024-12-05 23:57:44.895916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:20:12.330 [2024-12-05 23:57:44.895923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:44.908223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.330 [2024-12-05 23:57:44.908256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:12.330 [2024-12-05 23:57:44.908267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.273 ms 00:20:12.330 [2024-12-05 23:57:44.908275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:44.908623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.330 [2024-12-05 23:57:44.908639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:12.330 [2024-12-05 23:57:44.908647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:12.330 [2024-12-05 23:57:44.908654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:44.943385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.330 [2024-12-05 23:57:44.943414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.330 [2024-12-05 23:57:44.943423] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.330 [2024-12-05 23:57:44.943434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:44.943499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.330 [2024-12-05 23:57:44.943507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.330 [2024-12-05 23:57:44.943515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.330 [2024-12-05 23:57:44.943521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:44.943563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.330 [2024-12-05 23:57:44.943572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.330 [2024-12-05 23:57:44.943580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.330 [2024-12-05 23:57:44.943587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:44.943606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.330 [2024-12-05 23:57:44.943614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.330 [2024-12-05 23:57:44.943621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.330 [2024-12-05 23:57:44.943628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.330 [2024-12-05 23:57:45.019209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.330 [2024-12-05 23:57:45.019250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.330 [2024-12-05 23:57:45.019261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.330 [2024-12-05 23:57:45.019268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.600 [2024-12-05 23:57:45.081276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.600 [2024-12-05 23:57:45.081345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.600 [2024-12-05 23:57:45.081399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:20:12.600 [2024-12-05 23:57:45.081506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:12.600 [2024-12-05 23:57:45.081561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.600 [2024-12-05 23:57:45.081618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.600 [2024-12-05 23:57:45.081677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.600 [2024-12-05 23:57:45.081685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.600 [2024-12-05 23:57:45.081692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.600 [2024-12-05 23:57:45.081817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.860 ms, result 0 00:20:13.167 00:20:13.167 00:20:13.167 23:57:45 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:13.167 23:57:45 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:13.730 23:57:46 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:13.730 [2024-12-05 23:57:46.372497] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:20:13.730 [2024-12-05 23:57:46.372798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76965 ] 00:20:13.988 [2024-12-05 23:57:46.531822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.988 [2024-12-05 23:57:46.627176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.246 [2024-12-05 23:57:46.883982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.246 [2024-12-05 23:57:46.884044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.505 [2024-12-05 23:57:47.042753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.505 [2024-12-05 23:57:47.042798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.505 [2024-12-05 23:57:47.042811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.505 [2024-12-05 23:57:47.042820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.505 [2024-12-05 23:57:47.045541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.505 [2024-12-05 23:57:47.045575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.505 [2024-12-05 23:57:47.045585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:20:14.505 [2024-12-05 23:57:47.045593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.505 [2024-12-05 23:57:47.045662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.505 [2024-12-05 23:57:47.046687] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.505 [2024-12-05 23:57:47.046728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.505 [2024-12-05 23:57:47.046738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.505 [2024-12-05 23:57:47.046748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:20:14.505 [2024-12-05 23:57:47.046755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.505 [2024-12-05 23:57:47.048617] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.505 [2024-12-05 23:57:47.061159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.505 [2024-12-05 23:57:47.061287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.505 [2024-12-05 23:57:47.061305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.543 ms 00:20:14.505 [2024-12-05 23:57:47.061314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.505 [2024-12-05 23:57:47.061394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.505 [2024-12-05 23:57:47.061406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.505 [2024-12-05 23:57:47.061414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:14.505 [2024-12-05 23:57:47.061421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.505 [2024-12-05 23:57:47.066182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:14.505 [2024-12-05 23:57:47.066210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.505 [2024-12-05 23:57:47.066219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:20:14.505 [2024-12-05 23:57:47.066227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.066313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.506 [2024-12-05 23:57:47.066323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.506 [2024-12-05 23:57:47.066331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:14.506 [2024-12-05 23:57:47.066338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.066367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.506 [2024-12-05 23:57:47.066375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.506 [2024-12-05 23:57:47.066383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.506 [2024-12-05 23:57:47.066389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.066410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:14.506 [2024-12-05 23:57:47.069811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.506 [2024-12-05 23:57:47.069838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.506 [2024-12-05 23:57:47.069847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms 00:20:14.506 [2024-12-05 23:57:47.069854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.069890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.506 [2024-12-05 23:57:47.069899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.506 [2024-12-05 23:57:47.069908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:14.506 [2024-12-05 23:57:47.069915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.069934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.506 [2024-12-05 23:57:47.069952] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:14.506 [2024-12-05 23:57:47.070002] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.506 [2024-12-05 23:57:47.070018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:14.506 [2024-12-05 23:57:47.070119] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.506 [2024-12-05 23:57:47.070130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.506 [2024-12-05 23:57:47.070140] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.506 [2024-12-05 23:57:47.070152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070161] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070169] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:14.506 [2024-12-05 23:57:47.070176] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.506 [2024-12-05 23:57:47.070184] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.506 [2024-12-05 23:57:47.070191] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.506 [2024-12-05 23:57:47.070198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.506 [2024-12-05 23:57:47.070206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.506 [2024-12-05 23:57:47.070213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:14.506 [2024-12-05 23:57:47.070220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.070309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.506 [2024-12-05 23:57:47.070319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.506 [2024-12-05 23:57:47.070327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:14.506 [2024-12-05 23:57:47.070333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.506 [2024-12-05 23:57:47.070443] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.506 [2024-12-05 23:57:47.070453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.506 [2024-12-05 23:57:47.070462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.506 [2024-12-05 23:57:47.070484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.506 [2024-12-05 23:57:47.070505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.506 [2024-12-05 23:57:47.070518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.506 [2024-12-05 23:57:47.070531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:14.506 [2024-12-05 23:57:47.070537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.506 [2024-12-05 23:57:47.070544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.506 [2024-12-05 23:57:47.070552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:14.506 [2024-12-05 23:57:47.070559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.506 [2024-12-05 23:57:47.070573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070580] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.506 [2024-12-05 23:57:47.070593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.506 [2024-12-05 23:57:47.070613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.506 [2024-12-05 23:57:47.070634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.506 [2024-12-05 23:57:47.070653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.506 [2024-12-05 23:57:47.070672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.506 [2024-12-05 23:57:47.070686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.506 [2024-12-05 23:57:47.070692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:14.506 [2024-12-05 23:57:47.070698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.506 [2024-12-05 23:57:47.070705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.506 [2024-12-05 23:57:47.070711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:14.506 [2024-12-05 23:57:47.070718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.506 [2024-12-05 23:57:47.070731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:14.506 [2024-12-05 23:57:47.070738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070745] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.506 [2024-12-05 23:57:47.070752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.506 [2024-12-05 23:57:47.070762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.506 [2024-12-05 23:57:47.070777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.506 [2024-12-05 23:57:47.070784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.506 [2024-12-05 23:57:47.070791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.506 
[2024-12-05 23:57:47.070798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.506 [2024-12-05 23:57:47.070804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.506 [2024-12-05 23:57:47.070810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.506 [2024-12-05 23:57:47.070818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.506 [2024-12-05 23:57:47.070827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.506 [2024-12-05 23:57:47.070835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:14.506 [2024-12-05 23:57:47.070843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:14.506 [2024-12-05 23:57:47.070850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:14.506 [2024-12-05 23:57:47.070857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:14.506 [2024-12-05 23:57:47.070864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:14.506 [2024-12-05 23:57:47.070871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:14.506 [2024-12-05 23:57:47.070877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:14.506 [2024-12-05 23:57:47.070884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:14.506 [2024-12-05 23:57:47.070891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:14.507 [2024-12-05 23:57:47.070898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:14.507 [2024-12-05 23:57:47.070905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:14.507 [2024-12-05 23:57:47.070911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:14.507 [2024-12-05 23:57:47.070918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:14.507 [2024-12-05 23:57:47.070925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:14.507 [2024-12-05 23:57:47.070932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.507 [2024-12-05 23:57:47.070940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.507 [2024-12-05 23:57:47.070948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.507 [2024-12-05 23:57:47.070955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.507 [2024-12-05 23:57:47.070962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.507 [2024-12-05 23:57:47.070980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.507 [2024-12-05 23:57:47.070988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.070998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.507 [2024-12-05 23:57:47.071005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:20:14.507 [2024-12-05 23:57:47.071013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.096743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.096860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.507 [2024-12-05 23:57:47.096913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.676 ms 00:20:14.507 [2024-12-05 23:57:47.096936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.097082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.097110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.507 [2024-12-05 23:57:47.097131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:14.507 [2024-12-05 23:57:47.097150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.140394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.140518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.507 [2024-12-05 23:57:47.140583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.212 ms 00:20:14.507 [2024-12-05 23:57:47.140607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.140708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.140737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.507 [2024-12-05 23:57:47.140802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:14.507 [2024-12-05 23:57:47.140825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.141167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.141210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.507 [2024-12-05 23:57:47.141239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:20:14.507 [2024-12-05 23:57:47.141340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.141483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.141852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.507 [2024-12-05 23:57:47.141927] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:14.507 [2024-12-05 23:57:47.141955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.155254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.155358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.507 [2024-12-05 23:57:47.155409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.226 ms 00:20:14.507 [2024-12-05 23:57:47.155432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.168161] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:14.507 [2024-12-05 23:57:47.168285] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.507 [2024-12-05 23:57:47.168345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.168367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.507 [2024-12-05 23:57:47.168387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.801 ms 00:20:14.507 [2024-12-05 23:57:47.168405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.192999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.193127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.507 [2024-12-05 23:57:47.193187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.516 ms 00:20:14.507 [2024-12-05 23:57:47.193213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.507 [2024-12-05 23:57:47.205623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.507 [2024-12-05 23:57:47.205732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.507 [2024-12-05 23:57:47.205781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.326 ms 00:20:14.507 [2024-12-05 23:57:47.205803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.218204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.218328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.766 [2024-12-05 23:57:47.218382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.976 ms 00:20:14.766 [2024-12-05 23:57:47.218404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.219321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.219487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.766 [2024-12-05 23:57:47.219547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:20:14.766 [2024-12-05 23:57:47.219571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.274890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.275074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.766 [2024-12-05 23:57:47.275129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.278 ms 00:20:14.766 [2024-12-05 23:57:47.275152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.285590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.766 [2024-12-05 23:57:47.299185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.299302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.766 [2024-12-05 23:57:47.299351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.893 ms 00:20:14.766 [2024-12-05 23:57:47.299377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.299460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.299487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.766 [2024-12-05 23:57:47.299507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:14.766 [2024-12-05 23:57:47.299525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.299585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.299607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.766 [2024-12-05 23:57:47.299734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:14.766 [2024-12-05 23:57:47.299762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.299806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.299828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.766 [2024-12-05 23:57:47.299848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.766 [2024-12-05 23:57:47.299866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.299945] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.766 [2024-12-05 23:57:47.299993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.300015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.766 [2024-12-05 23:57:47.300035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:14.766 [2024-12-05 23:57:47.300054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.323940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.324060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.766 [2024-12-05 23:57:47.324111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.853 ms 00:20:14.766 [2024-12-05 23:57:47.324136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.766 [2024-12-05 23:57:47.324227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.766 [2024-12-05 23:57:47.324461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.766 [2024-12-05 23:57:47.324493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:14.766 [2024-12-05 23:57:47.324513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:14.766 [2024-12-05 23:57:47.325661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.766 [2024-12-05 23:57:47.328859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.642 ms, result 0 00:20:14.766 [2024-12-05 23:57:47.329842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.766 [2024-12-05 23:57:47.342835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.026  [2024-12-05T23:57:47.735Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-12-05 23:57:47.528353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.026 [2024-12-05 23:57:47.536701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.536734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.026 [2024-12-05 23:57:47.536750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.026 [2024-12-05 23:57:47.536758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.536777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.026 [2024-12-05 23:57:47.539357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.539473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.026 [2024-12-05 23:57:47.539488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:20:15.026 [2024-12-05 23:57:47.539496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.542056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.542084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.026 [2024-12-05 23:57:47.542093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:20:15.026 [2024-12-05 23:57:47.542101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.546614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.546720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.026 [2024-12-05 23:57:47.546735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.495 ms 00:20:15.026 [2024-12-05 23:57:47.546743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.553635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.553734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.026 [2024-12-05 23:57:47.553748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.854 ms 00:20:15.026 [2024-12-05 23:57:47.553756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.577271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.577303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.026 [2024-12-05 23:57:47.577314] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.473 ms 00:20:15.026 [2024-12-05 23:57:47.577321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.591597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.591638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.026 [2024-12-05 23:57:47.591649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.227 ms 00:20:15.026 [2024-12-05 23:57:47.591657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.591789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.591798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.026 [2024-12-05 23:57:47.591812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:15.026 [2024-12-05 23:57:47.591820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.615359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.615472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.026 [2024-12-05 23:57:47.615487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.523 ms 00:20:15.026 [2024-12-05 23:57:47.615493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.638955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.638991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.026 [2024-12-05 23:57:47.639002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.432 ms 00:20:15.026 [2024-12-05 23:57:47.639010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.661586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.661617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.026 [2024-12-05 23:57:47.661626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.543 ms 00:20:15.026 [2024-12-05 23:57:47.661633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.684167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.026 [2024-12-05 23:57:47.684196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.026 [2024-12-05 23:57:47.684206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.478 ms 00:20:15.026 [2024-12-05 23:57:47.684214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.026 [2024-12-05 23:57:47.684270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.026 [2024-12-05 23:57:47.684284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.026 [2024-12-05 23:57:47.684316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.026 [2024-12-05 23:57:47.684525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684848] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.684993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.685001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.685008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.685016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.685023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.685030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.027 [2024-12-05 23:57:47.685045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.027 [2024-12-05 23:57:47.685053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:20:15.027 [2024-12-05 23:57:47.685061] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.027 [2024-12-05 23:57:47.685068] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:15.027 [2024-12-05 23:57:47.685074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.027 [2024-12-05 23:57:47.685082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.027 [2024-12-05 23:57:47.685089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.027 [2024-12-05 23:57:47.685096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.027 [2024-12-05 23:57:47.685123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.027 [2024-12-05 23:57:47.685129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.027 [2024-12-05 23:57:47.685136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.027 [2024-12-05 23:57:47.685142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.027 [2024-12-05 23:57:47.685150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.027 [2024-12-05 23:57:47.685158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:20:15.027 [2024-12-05 23:57:47.685165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.027 [2024-12-05 23:57:47.697069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.027 [2024-12-05 23:57:47.697097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.027 [2024-12-05 23:57:47.697106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.878 ms 00:20:15.027 [2024-12-05 23:57:47.697114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.027 [2024-12-05 23:57:47.697461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.027 [2024-12-05 23:57:47.697470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.027 [2024-12-05 23:57:47.697478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:15.027 [2024-12-05 23:57:47.697485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.732059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.732195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.285 [2024-12-05 23:57:47.732212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.732225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.732304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.732314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.285 [2024-12-05 23:57:47.732322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.732331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.732370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.732380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.285 [2024-12-05 23:57:47.732389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.732397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.732417] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.732425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.285 [2024-12-05 23:57:47.732434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.732441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.808508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.808552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.285 [2024-12-05 23:57:47.808563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.808575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.870802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.870843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.285 [2024-12-05 23:57:47.870853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.870861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.870910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.285 [2024-12-05 23:57:47.870922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.285 [2024-12-05 23:57:47.870934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.285 [2024-12-05 23:57:47.870941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.285 [2024-12-05 23:57:47.870989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.286 [2024-12-05 23:57:47.871002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.286 [2024-12-05 23:57:47.871010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.286 [2024-12-05 23:57:47.871017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.286 [2024-12-05 23:57:47.871105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.286 [2024-12-05 23:57:47.871114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.286 [2024-12-05 23:57:47.871122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.286 [2024-12-05 23:57:47.871129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.286 [2024-12-05 23:57:47.871172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.286 [2024-12-05 23:57:47.871182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.286 [2024-12-05 23:57:47.871192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.286 [2024-12-05 23:57:47.871199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.286 [2024-12-05 23:57:47.871233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.286 [2024-12-05 23:57:47.871242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.286 [2024-12-05 23:57:47.871250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.286 [2024-12-05 23:57:47.871257] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:15.286 [2024-12-05 23:57:47.871297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.286 [2024-12-05 23:57:47.871309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.286 [2024-12-05 23:57:47.871317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.286 [2024-12-05 23:57:47.871324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.286 [2024-12-05 23:57:47.871452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.740 ms, result 0 00:20:15.852 00:20:15.852 00:20:16.111 23:57:48 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76990 00:20:16.111 23:57:48 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:16.111 23:57:48 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76990 00:20:16.111 23:57:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76990 ']' 00:20:16.111 23:57:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.111 23:57:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.111 23:57:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.111 23:57:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.111 23:57:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:16.111 [2024-12-05 23:57:48.643593] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
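The xtrace lines above show trim.sh starting a fresh spdk_tgt with FTL init logging (-L ftl_init) and waiting for it to listen on /var/tmp/spdk.sock before reloading the saved FTL configuration. A minimal sketch of that pattern follows; the poll loop stands in for the harness's waitforlisten helper from common/autotest_common.sh, and the config file name ftl.json is a placeholder, not taken from the log.

  # Sketch of the launch-and-wait pattern used here (simplified).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!

  # Block until the target is listening on its UNIX domain RPC socket.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Restore the bdev/FTL configuration saved by the earlier test stage.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json

The captured pid is what the test later hands to killprocess once the unmap checks are done, as seen further down in this log.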
00:20:16.111 [2024-12-05 23:57:48.643889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76990 ] 00:20:16.111 [2024-12-05 23:57:48.806142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.369 [2024-12-05 23:57:48.902364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.935 23:57:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.935 23:57:49 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:16.935 23:57:49 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:17.195 [2024-12-05 23:57:49.696526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.195 [2024-12-05 23:57:49.696597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.195 [2024-12-05 23:57:49.874874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.195 [2024-12-05 23:57:49.874937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.195 [2024-12-05 23:57:49.874956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:17.195 [2024-12-05 23:57:49.874985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.195 [2024-12-05 23:57:49.878008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.195 [2024-12-05 23:57:49.878060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.195 [2024-12-05 23:57:49.878075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:20:17.195 [2024-12-05 23:57:49.878084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.195 [2024-12-05 23:57:49.878210] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.195 [2024-12-05 23:57:49.879103] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.195 [2024-12-05 23:57:49.879154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.195 [2024-12-05 23:57:49.879164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.195 [2024-12-05 23:57:49.879177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:20:17.195 [2024-12-05 23:57:49.879186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.195 [2024-12-05 23:57:49.881054] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.195 [2024-12-05 23:57:49.895480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.195 [2024-12-05 23:57:49.895536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.195 [2024-12-05 23:57:49.895551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.431 ms 00:20:17.195 [2024-12-05 23:57:49.895563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.195 [2024-12-05 23:57:49.895682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.195 [2024-12-05 23:57:49.895701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.195 [2024-12-05 23:57:49.895712] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:17.195 [2024-12-05 23:57:49.895722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.904187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.904247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.456 [2024-12-05 23:57:49.904259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.411 ms 00:20:17.456 [2024-12-05 23:57:49.904269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.904388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.904401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.456 [2024-12-05 23:57:49.904410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:17.456 [2024-12-05 23:57:49.904424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.904449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.904458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.456 [2024-12-05 23:57:49.904467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.456 [2024-12-05 23:57:49.904476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.904500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.456 [2024-12-05 23:57:49.908633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.908836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.456 [2024-12-05 23:57:49.908862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.137 ms 00:20:17.456 [2024-12-05 23:57:49.908870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.908957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.908990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.456 [2024-12-05 23:57:49.909003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.456 [2024-12-05 23:57:49.909013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.909037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.456 [2024-12-05 23:57:49.909061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.456 [2024-12-05 23:57:49.909109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.456 [2024-12-05 23:57:49.909125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.456 [2024-12-05 23:57:49.909234] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.456 [2024-12-05 23:57:49.909246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.456 [2024-12-05 23:57:49.909263] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.456 [2024-12-05 23:57:49.909273] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909284] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909293] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.456 [2024-12-05 23:57:49.909303] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.456 [2024-12-05 23:57:49.909311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.456 [2024-12-05 23:57:49.909323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.456 [2024-12-05 23:57:49.909331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.909341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.456 [2024-12-05 23:57:49.909349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:17.456 [2024-12-05 23:57:49.909358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.909447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.456 [2024-12-05 23:57:49.909457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.456 [2024-12-05 23:57:49.909466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:17.456 [2024-12-05 23:57:49.909475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.456 [2024-12-05 23:57:49.909575] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.456 [2024-12-05 23:57:49.909587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.456 [2024-12-05 23:57:49.909595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.456 [2024-12-05 23:57:49.909624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.456 [2024-12-05 23:57:49.909650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.456 [2024-12-05 23:57:49.909666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.456 [2024-12-05 23:57:49.909675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.456 [2024-12-05 23:57:49.909682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.456 [2024-12-05 23:57:49.909691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.456 [2024-12-05 23:57:49.909698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.456 [2024-12-05 23:57:49.909706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.456 
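A quick cross-check of the numbers just above, before the region dump continues: 23592960 L2P entries at 4 bytes per entry should need exactly 90 MiB of metadata, which matches the "Region l2p ... blocks: 90.00 MiB" line. The one-liner below is an editorial sanity check, not harness code.

  # Cross-check (not from the log): L2P entries * address size, in MiB.
  echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"   # prints: 90 MiB

Assuming FTL's 4 KiB logical block, the same entry count corresponds to a 90 GiB user address space; the bdev_ftl_unmap call issued later in this log at --lba 23591936 --num_blocks 1024 therefore trims exactly the last 1024 blocks of that space (23591936 + 1024 = 23592960).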
[2024-12-05 23:57:49.909714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.456 [2024-12-05 23:57:49.909723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.456 [2024-12-05 23:57:49.909754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.456 [2024-12-05 23:57:49.909780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.456 [2024-12-05 23:57:49.909802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.456 [2024-12-05 23:57:49.909828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.456 [2024-12-05 23:57:49.909843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.456 [2024-12-05 23:57:49.909850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.456 [2024-12-05 23:57:49.909859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.457 [2024-12-05 23:57:49.909866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.457 [2024-12-05 23:57:49.909874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.457 [2024-12-05 23:57:49.909881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.457 [2024-12-05 23:57:49.909889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.457 [2024-12-05 23:57:49.909895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.457 [2024-12-05 23:57:49.909907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.457 [2024-12-05 23:57:49.909913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.457 [2024-12-05 23:57:49.909922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.457 [2024-12-05 23:57:49.909929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.457 [2024-12-05 23:57:49.909939] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.457 [2024-12-05 23:57:49.909949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.457 [2024-12-05 23:57:49.909958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.457 [2024-12-05 23:57:49.909979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.457 [2024-12-05 23:57:49.909989] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:17.457 [2024-12-05 23:57:49.909996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.457 [2024-12-05 23:57:49.910005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.457 [2024-12-05 23:57:49.910013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.457 [2024-12-05 23:57:49.910023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.457 [2024-12-05 23:57:49.910030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.457 [2024-12-05 23:57:49.910040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.457 [2024-12-05 23:57:49.910050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.457 [2024-12-05 23:57:49.910072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.457 [2024-12-05 23:57:49.910081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.457 [2024-12-05 23:57:49.910089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.457 [2024-12-05 23:57:49.910098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.457 [2024-12-05 23:57:49.910106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.457 [2024-12-05 23:57:49.910115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.457 [2024-12-05 23:57:49.910122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.457 [2024-12-05 23:57:49.910132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.457 [2024-12-05 23:57:49.910140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.457 [2024-12-05 23:57:49.910181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.457 [2024-12-05 
23:57:49.910189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.457 [2024-12-05 23:57:49.910210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.457 [2024-12-05 23:57:49.910220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.457 [2024-12-05 23:57:49.910227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.457 [2024-12-05 23:57:49.910237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.910244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.457 [2024-12-05 23:57:49.910254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:20:17.457 [2024-12-05 23:57:49.910264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:49.943135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.943325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.457 [2024-12-05 23:57:49.943398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.809 ms 00:20:17.457 [2024-12-05 23:57:49.943426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:49.943580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.943664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.457 [2024-12-05 23:57:49.943693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:17.457 [2024-12-05 23:57:49.943713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:49.979572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.979774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.457 [2024-12-05 23:57:49.979857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.388 ms 00:20:17.457 [2024-12-05 23:57:49.979884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:49.980026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.980058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.457 [2024-12-05 23:57:49.980083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:17.457 [2024-12-05 23:57:49.980103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:49.980774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.980943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.457 [2024-12-05 23:57:49.981555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:17.457 [2024-12-05 23:57:49.981610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:49.981849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:49.981878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.457 [2024-12-05 23:57:49.981903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:20:17.457 [2024-12-05 23:57:49.981922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:50.000273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.000454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.457 [2024-12-05 23:57:50.000912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.311 ms 00:20:17.457 [2024-12-05 23:57:50.001004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:50.026643] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:17.457 [2024-12-05 23:57:50.026863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.457 [2024-12-05 23:57:50.026950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.026995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.457 [2024-12-05 23:57:50.027023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.665 ms 00:20:17.457 [2024-12-05 23:57:50.027052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:50.062253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.062466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.457 [2024-12-05 23:57:50.062545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.082 ms 00:20:17.457 [2024-12-05 23:57:50.062558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:50.075946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.076010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.457 [2024-12-05 23:57:50.076031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.279 ms 00:20:17.457 [2024-12-05 23:57:50.076040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:50.089054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.089101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.457 [2024-12-05 23:57:50.089117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:20:17.457 [2024-12-05 23:57:50.089126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 23:57:50.089846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.089872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.457 [2024-12-05 23:57:50.089885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:20:17.457 [2024-12-05 23:57:50.089893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.457 [2024-12-05 
23:57:50.157543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.457 [2024-12-05 23:57:50.157615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.457 [2024-12-05 23:57:50.157635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.617 ms 00:20:17.457 [2024-12-05 23:57:50.157645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.168999] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.716 [2024-12-05 23:57:50.188732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.188797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.716 [2024-12-05 23:57:50.188815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.970 ms 00:20:17.716 [2024-12-05 23:57:50.188825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.188924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.188938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.716 [2024-12-05 23:57:50.188947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.716 [2024-12-05 23:57:50.188957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.189052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.189066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.716 [2024-12-05 23:57:50.189074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:17.716 [2024-12-05 23:57:50.189087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.189113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.189124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.716 [2024-12-05 23:57:50.189133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:17.716 [2024-12-05 23:57:50.189146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.189206] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.716 [2024-12-05 23:57:50.189221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.189233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.716 [2024-12-05 23:57:50.189244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:17.716 [2024-12-05 23:57:50.189252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.215656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.215848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.716 [2024-12-05 23:57:50.215876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.368 ms 00:20:17.716 [2024-12-05 23:57:50.215885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.216045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.716 [2024-12-05 23:57:50.216059] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.716 [2024-12-05 23:57:50.216071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:17.716 [2024-12-05 23:57:50.216082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.716 [2024-12-05 23:57:50.217643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.716 [2024-12-05 23:57:50.221057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 342.435 ms, result 0 00:20:17.716 [2024-12-05 23:57:50.223523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.716 Some configs were skipped because the RPC state that can call them passed over. 00:20:17.716 23:57:50 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:17.973 [2024-12-05 23:57:50.468653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.973 [2024-12-05 23:57:50.468728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:17.973 [2024-12-05 23:57:50.468744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.496 ms 00:20:17.973 [2024-12-05 23:57:50.468756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.973 [2024-12-05 23:57:50.468793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.644 ms, result 0 00:20:17.973 true 00:20:17.973 23:57:50 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:17.973 [2024-12-05 23:57:50.671913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.973 [2024-12-05 23:57:50.671980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:17.973 [2024-12-05 23:57:50.671995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:20:17.973 [2024-12-05 23:57:50.672002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.973 [2024-12-05 23:57:50.672039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.663 ms, result 0 00:20:17.973 true 00:20:18.233 23:57:50 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76990 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76990 ']' 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76990 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76990 00:20:18.233 killing process with pid 76990 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76990' 00:20:18.233 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76990 00:20:18.234 23:57:50 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76990 00:20:18.855 [2024-12-05 23:57:51.460466] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.460542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:18.855 [2024-12-05 23:57:51.460558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.855 [2024-12-05 23:57:51.460568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.460597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:18.855 [2024-12-05 23:57:51.463749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.463952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:18.855 [2024-12-05 23:57:51.464000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:20:18.855 [2024-12-05 23:57:51.464010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.464354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.464366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:18.855 [2024-12-05 23:57:51.464379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:18.855 [2024-12-05 23:57:51.464387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.469062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.469107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:18.855 [2024-12-05 23:57:51.469123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.648 ms 00:20:18.855 [2024-12-05 23:57:51.469131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.476158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.476204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:18.855 [2024-12-05 23:57:51.476223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.975 ms 00:20:18.855 [2024-12-05 23:57:51.476232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.488099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.488156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:18.855 [2024-12-05 23:57:51.488173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.777 ms 00:20:18.855 [2024-12-05 23:57:51.488180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.497925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.497992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.855 [2024-12-05 23:57:51.498007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.688 ms 00:20:18.855 [2024-12-05 23:57:51.498016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.498179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.498190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.855 [2024-12-05 23:57:51.498203] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:18.855 [2024-12-05 23:57:51.498211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.510154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.510199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:18.855 [2024-12-05 23:57:51.510213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.916 ms 00:20:18.855 [2024-12-05 23:57:51.510220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.521616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.521798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:18.855 [2024-12-05 23:57:51.521830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.341 ms 00:20:18.855 [2024-12-05 23:57:51.521837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.532602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.532864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.855 [2024-12-05 23:57:51.532895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.712 ms 00:20:18.855 [2024-12-05 23:57:51.532903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.543535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.855 [2024-12-05 23:57:51.543697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.855 [2024-12-05 23:57:51.543722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.529 ms 00:20:18.855 [2024-12-05 23:57:51.543730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.855 [2024-12-05 23:57:51.543773] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.855 [2024-12-05 23:57:51.543790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 
23:57:51.543884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.543997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.544008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.544016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.544026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.544034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.855 [2024-12-05 23:57:51.544043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:18.856 [2024-12-05 23:57:51.544137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.856 [2024-12-05 23:57:51.544757] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.856 [2024-12-05 23:57:51.544771] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:20:18.856 [2024-12-05 23:57:51.544782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.856 [2024-12-05 23:57:51.544792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:18.856 [2024-12-05 23:57:51.544799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.856 [2024-12-05 23:57:51.544809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.856 [2024-12-05 23:57:51.544817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.856 [2024-12-05 23:57:51.544827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.856 [2024-12-05 23:57:51.544835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.856 [2024-12-05 23:57:51.544844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.856 [2024-12-05 23:57:51.544851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.856 [2024-12-05 23:57:51.544860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:18.856 [2024-12-05 23:57:51.544868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.856 [2024-12-05 23:57:51.544879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:20:18.856 [2024-12-05 23:57:51.544888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.856 [2024-12-05 23:57:51.558854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.856 [2024-12-05 23:57:51.559037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.856 [2024-12-05 23:57:51.559073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.919 ms 00:20:18.856 [2024-12-05 23:57:51.559082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.857 [2024-12-05 23:57:51.559527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.857 [2024-12-05 23:57:51.559540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.857 [2024-12-05 23:57:51.559555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:20:18.857 [2024-12-05 23:57:51.559562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.608907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.608959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.119 [2024-12-05 23:57:51.608991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.609002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.609109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.609120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.119 [2024-12-05 23:57:51.609134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.609142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.609195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.609206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.119 [2024-12-05 23:57:51.609219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.609226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.609246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.609254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.119 [2024-12-05 23:57:51.609265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.609275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.695047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.695109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.119 [2024-12-05 23:57:51.695127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.695136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 
23:57:51.765897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.765959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.119 [2024-12-05 23:57:51.766003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.766109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.119 [2024-12-05 23:57:51.766123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.766193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.119 [2024-12-05 23:57:51.766203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.766328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.119 [2024-12-05 23:57:51.766339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.766394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:19.119 [2024-12-05 23:57:51.766404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.766471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.119 [2024-12-05 23:57:51.766483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.119 [2024-12-05 23:57:51.766553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.119 [2024-12-05 23:57:51.766563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.119 [2024-12-05 23:57:51.766572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.119 [2024-12-05 23:57:51.766731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 306.253 ms, result 0 00:20:20.060 23:57:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.060 [2024-12-05 23:57:52.586540] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:20:20.060 [2024-12-05 23:57:52.586701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77044 ] 00:20:20.060 [2024-12-05 23:57:52.750305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.319 [2024-12-05 23:57:52.881188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.580 [2024-12-05 23:57:53.174340] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.580 [2024-12-05 23:57:53.174403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.841 [2024-12-05 23:57:53.334244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.334298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.841 [2024-12-05 23:57:53.334311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.841 [2024-12-05 23:57:53.334320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.337090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.337130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.841 [2024-12-05 23:57:53.337140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:20:20.841 [2024-12-05 23:57:53.337148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.337229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.841 [2024-12-05 23:57:53.337955] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.841 [2024-12-05 23:57:53.337992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.338001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.841 [2024-12-05 23:57:53.338009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:20:20.841 [2024-12-05 23:57:53.338017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.339321] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.841 [2024-12-05 23:57:53.352445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.352485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.841 [2024-12-05 23:57:53.352498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.124 ms 00:20:20.841 [2024-12-05 23:57:53.352506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.352597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.352608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.841 [2024-12-05 23:57:53.352617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:20.841 [2024-12-05 
23:57:53.352624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.358227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.358264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.841 [2024-12-05 23:57:53.358273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.562 ms 00:20:20.841 [2024-12-05 23:57:53.358281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.358368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.358378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.841 [2024-12-05 23:57:53.358386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:20.841 [2024-12-05 23:57:53.358393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.358420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.358428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.841 [2024-12-05 23:57:53.358436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:20.841 [2024-12-05 23:57:53.358443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.358463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:20.841 [2024-12-05 23:57:53.361827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.361859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.841 [2024-12-05 23:57:53.361868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.369 ms 00:20:20.841 [2024-12-05 23:57:53.361875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.361913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.361922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.841 [2024-12-05 23:57:53.361930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:20.841 [2024-12-05 23:57:53.361937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.361957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.841 [2024-12-05 23:57:53.361986] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:20.841 [2024-12-05 23:57:53.362021] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.841 [2024-12-05 23:57:53.362035] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:20.841 [2024-12-05 23:57:53.362137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:20.841 [2024-12-05 23:57:53.362148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.841 [2024-12-05 23:57:53.362158] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:20.841 [2024-12-05 23:57:53.362170] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.841 [2024-12-05 23:57:53.362179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.841 [2024-12-05 23:57:53.362187] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:20.841 [2024-12-05 23:57:53.362194] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.841 [2024-12-05 23:57:53.362202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:20.841 [2024-12-05 23:57:53.362209] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:20.841 [2024-12-05 23:57:53.362217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.362224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.841 [2024-12-05 23:57:53.362231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:20:20.841 [2024-12-05 23:57:53.362238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.362325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.841 [2024-12-05 23:57:53.362335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.841 [2024-12-05 23:57:53.362343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:20.841 [2024-12-05 23:57:53.362349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.841 [2024-12-05 23:57:53.362463] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.842 [2024-12-05 23:57:53.362481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.842 [2024-12-05 23:57:53.362489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.842 [2024-12-05 23:57:53.362512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.842 [2024-12-05 23:57:53.362534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.842 [2024-12-05 23:57:53.362548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.842 [2024-12-05 23:57:53.362561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:20.842 [2024-12-05 23:57:53.362567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.842 [2024-12-05 23:57:53.362574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.842 [2024-12-05 23:57:53.362581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:20.842 [2024-12-05 23:57:53.362588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:20.842 [2024-12-05 23:57:53.362601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.842 [2024-12-05 23:57:53.362621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.842 [2024-12-05 23:57:53.362639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.842 [2024-12-05 23:57:53.362658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.842 [2024-12-05 23:57:53.362676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:20.842 [2024-12-05 23:57:53.362694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.842 [2024-12-05 23:57:53.362707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.842 [2024-12-05 23:57:53.362713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:20.842 [2024-12-05 23:57:53.362719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.842 [2024-12-05 23:57:53.362726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:20.842 [2024-12-05 23:57:53.362732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:20.842 [2024-12-05 23:57:53.362738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:20.842 [2024-12-05 23:57:53.362751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:20.842 [2024-12-05 23:57:53.362759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362766] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.842 [2024-12-05 23:57:53.362773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.842 [2024-12-05 23:57:53.362782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.842 [2024-12-05 23:57:53.362797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.842 [2024-12-05 23:57:53.362803] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.842 [2024-12-05 23:57:53.362809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.842 [2024-12-05 23:57:53.362816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.842 [2024-12-05 23:57:53.362822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:20.842 [2024-12-05 23:57:53.362829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.842 [2024-12-05 23:57:53.362836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.842 [2024-12-05 23:57:53.362845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.362854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:20.842 [2024-12-05 23:57:53.362861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:20.842 [2024-12-05 23:57:53.362868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:20.842 [2024-12-05 23:57:53.362875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:20.842 [2024-12-05 23:57:53.362881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:20.842 [2024-12-05 23:57:53.362888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:20.842 [2024-12-05 23:57:53.362895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:20.842 [2024-12-05 23:57:53.362902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:20.842 [2024-12-05 23:57:53.362910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:20.842 [2024-12-05 23:57:53.362917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.362924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.362931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.362938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.362945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:20.842 [2024-12-05 23:57:53.362952] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.842 [2024-12-05 23:57:53.362961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.362991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.842 [2024-12-05 23:57:53.363003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.842 [2024-12-05 23:57:53.363013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:20.842 [2024-12-05 23:57:53.363021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.842 [2024-12-05 23:57:53.363029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.363038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.843 [2024-12-05 23:57:53.363046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:20:20.843 [2024-12-05 23:57:53.363053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.389518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.389554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.843 [2024-12-05 23:57:53.389564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.396 ms 00:20:20.843 [2024-12-05 23:57:53.389571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.389689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.389698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.843 [2024-12-05 23:57:53.389707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:20.843 [2024-12-05 23:57:53.389714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.434210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.434251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.843 [2024-12-05 23:57:53.434265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.474 ms 00:20:20.843 [2024-12-05 23:57:53.434274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.434362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.434374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.843 [2024-12-05 23:57:53.434383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.843 [2024-12-05 23:57:53.434390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.434744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.434770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.843 [2024-12-05 23:57:53.434785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:20:20.843 [2024-12-05 23:57:53.434793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.434923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.434932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.843 [2024-12-05 23:57:53.434940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:20.843 [2024-12-05 23:57:53.434948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.448887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.448923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.843 [2024-12-05 23:57:53.448933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:20:20.843 [2024-12-05 23:57:53.448941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.462189] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:20.843 [2024-12-05 23:57:53.462226] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.843 [2024-12-05 23:57:53.462238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.462246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.843 [2024-12-05 23:57:53.462255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.186 ms 00:20:20.843 [2024-12-05 23:57:53.462263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.487157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.487198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.843 [2024-12-05 23:57:53.487210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.819 ms 00:20:20.843 [2024-12-05 23:57:53.487219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.499713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.499753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:20.843 [2024-12-05 23:57:53.499763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.418 ms 00:20:20.843 [2024-12-05 23:57:53.499770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.512031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.512068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.843 [2024-12-05 23:57:53.512078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.189 ms 00:20:20.843 [2024-12-05 23:57:53.512086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.843 [2024-12-05 23:57:53.512729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.843 [2024-12-05 23:57:53.512757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.843 [2024-12-05 23:57:53.512768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:20:20.843 [2024-12-05 23:57:53.512775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.573163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 
23:57:53.573218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:21.105 [2024-12-05 23:57:53.573232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.362 ms 00:20:21.105 [2024-12-05 23:57:53.573240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.583918] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:21.105 [2024-12-05 23:57:53.598682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.598722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:21.105 [2024-12-05 23:57:53.598735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.342 ms 00:20:21.105 [2024-12-05 23:57:53.598748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.598825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.598836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:21.105 [2024-12-05 23:57:53.598844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:21.105 [2024-12-05 23:57:53.598852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.598896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.598905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:21.105 [2024-12-05 23:57:53.598914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:21.105 [2024-12-05 23:57:53.598924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.598956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.598985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:21.105 [2024-12-05 23:57:53.598994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:21.105 [2024-12-05 23:57:53.599001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.599031] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:21.105 [2024-12-05 23:57:53.599041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.599049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:21.105 [2024-12-05 23:57:53.599057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:21.105 [2024-12-05 23:57:53.599064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.623141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.623181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:21.105 [2024-12-05 23:57:53.623194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.055 ms 00:20:21.105 [2024-12-05 23:57:53.623203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.623289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.105 [2024-12-05 23:57:53.623300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:21.105 [2024-12-05 
23:57:53.623308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:21.105 [2024-12-05 23:57:53.623316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.105 [2024-12-05 23:57:53.624159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.105 [2024-12-05 23:57:53.627199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.631 ms, result 0 00:20:21.105 [2024-12-05 23:57:53.628336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.106 [2024-12-05 23:57:53.641221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.044  [2024-12-05T23:57:55.698Z] Copying: 13/256 [MB] (13 MBps) [2024-12-05T23:57:57.082Z] Copying: 23/256 [MB] (10 MBps) [2024-12-05T23:57:58.024Z] Copying: 34516/262144 [kB] (10012 kBps) [2024-12-05T23:57:58.968Z] Copying: 44392/262144 [kB] (9876 kBps) [2024-12-05T23:57:59.912Z] Copying: 54504/262144 [kB] (10112 kBps) [2024-12-05T23:58:00.958Z] Copying: 63/256 [MB] (10 MBps) [2024-12-05T23:58:01.893Z] Copying: 75/256 [MB] (11 MBps) [2024-12-05T23:58:02.829Z] Copying: 87/256 [MB] (11 MBps) [2024-12-05T23:58:03.762Z] Copying: 99/256 [MB] (11 MBps) [2024-12-05T23:58:05.136Z] Copying: 110/256 [MB] (11 MBps) [2024-12-05T23:58:05.703Z] Copying: 121/256 [MB] (11 MBps) [2024-12-05T23:58:07.081Z] Copying: 132/256 [MB] (10 MBps) [2024-12-05T23:58:08.016Z] Copying: 145528/262144 [kB] (10020 kBps) [2024-12-05T23:58:08.952Z] Copying: 154/256 [MB] (12 MBps) [2024-12-05T23:58:09.888Z] Copying: 166/256 [MB] (11 MBps) [2024-12-05T23:58:10.822Z] Copying: 177/256 [MB] (11 MBps) [2024-12-05T23:58:11.751Z] Copying: 189/256 [MB] (11 MBps) [2024-12-05T23:58:13.128Z] Copying: 201/256 [MB] (11 MBps) [2024-12-05T23:58:13.738Z] Copying: 212/256 [MB] (10 MBps) [2024-12-05T23:58:15.118Z] Copying: 226872/262144 [kB] (9680 kBps) [2024-12-05T23:58:16.053Z] Copying: 237112/262144 [kB] (10240 kBps) [2024-12-05T23:58:16.992Z] Copying: 241/256 [MB] (10 MBps) [2024-12-05T23:58:17.252Z] Copying: 252/256 [MB] (10 MBps) [2024-12-05T23:58:17.820Z] Copying: 256/256 [MB] (average 10 MBps)[2024-12-05 23:58:17.508268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.112 [2024-12-05 23:58:17.518725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.518782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.112 [2024-12-05 23:58:17.518806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:45.112 [2024-12-05 23:58:17.518816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.518843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:45.112 [2024-12-05 23:58:17.521892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.521939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.112 [2024-12-05 23:58:17.521952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:20:45.112 [2024-12-05 23:58:17.521962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.522285] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.522297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.112 [2024-12-05 23:58:17.522308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:45.112 [2024-12-05 23:58:17.522317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.526027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.526061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:45.112 [2024-12-05 23:58:17.526072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:20:45.112 [2024-12-05 23:58:17.526081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.533119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.533164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:45.112 [2024-12-05 23:58:17.533175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.019 ms 00:20:45.112 [2024-12-05 23:58:17.533183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.559911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.559987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.112 [2024-12-05 23:58:17.560003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.655 ms 00:20:45.112 [2024-12-05 23:58:17.560012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.576268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.576321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.112 [2024-12-05 23:58:17.576343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.185 ms 00:20:45.112 [2024-12-05 23:58:17.576353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.576522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.576535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:45.112 [2024-12-05 23:58:17.576554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:45.112 [2024-12-05 23:58:17.576562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.603075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.603126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:45.112 [2024-12-05 23:58:17.603139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.494 ms 00:20:45.112 [2024-12-05 23:58:17.603147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.628944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.629010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:45.112 [2024-12-05 23:58:17.629024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.717 ms 00:20:45.112 [2024-12-05 23:58:17.629032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.654902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.654975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.112 [2024-12-05 23:58:17.654989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.803 ms 00:20:45.112 [2024-12-05 23:58:17.654998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.681962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.112 [2024-12-05 23:58:17.682033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.112 [2024-12-05 23:58:17.682046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.846 ms 00:20:45.112 [2024-12-05 23:58:17.682054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.112 [2024-12-05 23:58:17.682119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.112 [2024-12-05 23:58:17.682136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682469] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.112 [2024-12-05 23:58:17.682512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 
23:58:17.682681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:20:45.113 [2024-12-05 23:58:17.682878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.113 [2024-12-05 23:58:17.682980] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.113 [2024-12-05 23:58:17.682989] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc09c2db-6e6c-420a-8d4b-8435768d4837 00:20:45.113 [2024-12-05 23:58:17.682999] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.113 [2024-12-05 23:58:17.683007] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:45.113 [2024-12-05 23:58:17.683015] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.113 [2024-12-05 23:58:17.683023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.113 [2024-12-05 23:58:17.683031] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.113 [2024-12-05 23:58:17.683039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.113 [2024-12-05 23:58:17.683050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.113 [2024-12-05 23:58:17.683056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.113 [2024-12-05 23:58:17.683063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.113 [2024-12-05 23:58:17.683070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.113 [2024-12-05 23:58:17.683078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.113 [2024-12-05 23:58:17.683089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:20:45.113 [2024-12-05 23:58:17.683097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.113 [2024-12-05 23:58:17.697432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.113 [2024-12-05 23:58:17.697480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:45.113 [2024-12-05 23:58:17.697492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.313 ms 00:20:45.113 [2024-12-05 23:58:17.697500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.113 [2024-12-05 23:58:17.697910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.113 [2024-12-05 23:58:17.697929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.113 
[2024-12-05 23:58:17.697940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:45.113 [2024-12-05 23:58:17.697948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.113 [2024-12-05 23:58:17.737042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.113 [2024-12-05 23:58:17.737082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.113 [2024-12-05 23:58:17.737094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.113 [2024-12-05 23:58:17.737108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.113 [2024-12-05 23:58:17.737212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.113 [2024-12-05 23:58:17.737223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.113 [2024-12-05 23:58:17.737231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.113 [2024-12-05 23:58:17.737238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.113 [2024-12-05 23:58:17.737285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.113 [2024-12-05 23:58:17.737294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.113 [2024-12-05 23:58:17.737303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.113 [2024-12-05 23:58:17.737310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.113 [2024-12-05 23:58:17.737331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.113 [2024-12-05 23:58:17.737339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.113 [2024-12-05 23:58:17.737346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.113 [2024-12-05 23:58:17.737354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.821058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.821124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.375 [2024-12-05 23:58:17.821137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.821146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.375 [2024-12-05 23:58:17.889336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.375 [2024-12-05 23:58:17.889467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889521] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.375 [2024-12-05 23:58:17.889529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.375 [2024-12-05 23:58:17.889661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.375 [2024-12-05 23:58:17.889725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.375 [2024-12-05 23:58:17.889795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.889852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.375 [2024-12-05 23:58:17.889866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.375 [2024-12-05 23:58:17.889875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.375 [2024-12-05 23:58:17.889883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.375 [2024-12-05 23:58:17.890073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.344 ms, result 0 00:20:45.949 00:20:45.949 00:20:46.210 23:58:18 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:46.783 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:46.783 23:58:19 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76990 00:20:46.783 23:58:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76990 ']' 00:20:46.783 23:58:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76990 00:20:46.783 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76990) - No such process 00:20:46.783 Process with pid 76990 is not found 00:20:46.783 23:58:19 ftl.ftl_trim -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 76990 is not found' 00:20:46.783 ************************************ 00:20:46.783 END TEST ftl_trim 00:20:46.783 ************************************ 00:20:46.783 00:20:46.783 real 1m28.369s 00:20:46.783 user 1m56.982s 00:20:46.783 sys 0m5.605s 00:20:46.783 23:58:19 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.783 23:58:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:46.783 23:58:19 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:46.783 23:58:19 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:46.783 23:58:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.783 23:58:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:46.784 ************************************ 00:20:46.784 START TEST ftl_restore 00:20:46.784 ************************************ 00:20:46.784 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:46.784 * Looking for test storage... 00:20:46.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:46.784 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:46.784 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:46.784 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:47.045 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.045 23:58:19 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:47.045 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.045 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:47.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.045 --rc genhtml_branch_coverage=1 00:20:47.045 --rc genhtml_function_coverage=1 00:20:47.045 --rc genhtml_legend=1 00:20:47.045 --rc geninfo_all_blocks=1 00:20:47.045 --rc geninfo_unexecuted_blocks=1 00:20:47.045 00:20:47.045 ' 00:20:47.045 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:47.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.045 --rc genhtml_branch_coverage=1 00:20:47.045 --rc genhtml_function_coverage=1 00:20:47.045 --rc genhtml_legend=1 00:20:47.045 --rc geninfo_all_blocks=1 00:20:47.045 --rc geninfo_unexecuted_blocks=1 00:20:47.045 00:20:47.045 ' 00:20:47.045 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:47.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.045 --rc genhtml_branch_coverage=1 00:20:47.045 --rc genhtml_function_coverage=1 00:20:47.045 --rc genhtml_legend=1 00:20:47.045 --rc geninfo_all_blocks=1 00:20:47.045 --rc geninfo_unexecuted_blocks=1 00:20:47.045 00:20:47.045 ' 00:20:47.045 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:47.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.045 --rc genhtml_branch_coverage=1 00:20:47.045 --rc genhtml_function_coverage=1 00:20:47.045 --rc genhtml_legend=1 00:20:47.045 --rc geninfo_all_blocks=1 00:20:47.045 --rc geninfo_unexecuted_blocks=1 00:20:47.045 00:20:47.045 ' 00:20:47.045 23:58:19 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:47.045 23:58:19 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
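The cmp_versions trace above shows the usual dotted-version comparison idiom: split both version strings on '.', '-' or ':', then compare field by field numerically. A minimal standalone sketch of that idiom follows (bash; function name and details assumed for illustration, the real helper lives in scripts/common.sh):

    # Sketch of the version comparison traced above (assumed helper, not scripts/common.sh itself).
    cmp_versions_sketch() {
        local ver1 ver2
        local IFS=.-:                       # split fields on '.', '-' and ':'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields count as 0
            if (( a > b )); then [[ $2 == '>' ]]; return $?; fi
            if (( a < b )); then [[ $2 == '<' ]]; return $?; fi
        done
        [[ $2 == '=' ]]                     # every field equal
    }
    cmp_versions_sketch 1.15 '<' 2 && echo "1.15 < 2"    # prints "1.15 < 2", matching the trace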
00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.7jOEgi22DI 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:47.046 
23:58:19 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77385 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77385 00:20:47.046 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77385 ']' 00:20:47.046 23:58:19 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.046 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.046 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.046 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.046 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.046 23:58:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:47.046 [2024-12-05 23:58:19.667368] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:20:47.046 [2024-12-05 23:58:19.667757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77385 ] 00:20:47.307 [2024-12-05 23:58:19.833661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.307 [2024-12-05 23:58:19.961073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:48.354 23:58:20 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:48.354 23:58:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:48.616 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:48.616 { 00:20:48.616 "name": "nvme0n1", 00:20:48.616 "aliases": [ 00:20:48.616 "6056555d-02c0-46ce-b536-7a528fdce2af" 00:20:48.616 ], 00:20:48.616 "product_name": "NVMe disk", 00:20:48.616 "block_size": 4096, 00:20:48.616 "num_blocks": 1310720, 00:20:48.616 "uuid": 
"6056555d-02c0-46ce-b536-7a528fdce2af", 00:20:48.616 "numa_id": -1, 00:20:48.616 "assigned_rate_limits": { 00:20:48.616 "rw_ios_per_sec": 0, 00:20:48.616 "rw_mbytes_per_sec": 0, 00:20:48.616 "r_mbytes_per_sec": 0, 00:20:48.616 "w_mbytes_per_sec": 0 00:20:48.616 }, 00:20:48.616 "claimed": true, 00:20:48.616 "claim_type": "read_many_write_one", 00:20:48.616 "zoned": false, 00:20:48.616 "supported_io_types": { 00:20:48.616 "read": true, 00:20:48.616 "write": true, 00:20:48.616 "unmap": true, 00:20:48.616 "flush": true, 00:20:48.616 "reset": true, 00:20:48.616 "nvme_admin": true, 00:20:48.616 "nvme_io": true, 00:20:48.616 "nvme_io_md": false, 00:20:48.616 "write_zeroes": true, 00:20:48.616 "zcopy": false, 00:20:48.616 "get_zone_info": false, 00:20:48.616 "zone_management": false, 00:20:48.616 "zone_append": false, 00:20:48.616 "compare": true, 00:20:48.616 "compare_and_write": false, 00:20:48.616 "abort": true, 00:20:48.616 "seek_hole": false, 00:20:48.616 "seek_data": false, 00:20:48.616 "copy": true, 00:20:48.616 "nvme_iov_md": false 00:20:48.616 }, 00:20:48.616 "driver_specific": { 00:20:48.616 "nvme": [ 00:20:48.616 { 00:20:48.616 "pci_address": "0000:00:11.0", 00:20:48.616 "trid": { 00:20:48.616 "trtype": "PCIe", 00:20:48.616 "traddr": "0000:00:11.0" 00:20:48.616 }, 00:20:48.616 "ctrlr_data": { 00:20:48.616 "cntlid": 0, 00:20:48.616 "vendor_id": "0x1b36", 00:20:48.616 "model_number": "QEMU NVMe Ctrl", 00:20:48.616 "serial_number": "12341", 00:20:48.616 "firmware_revision": "8.0.0", 00:20:48.616 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:48.616 "oacs": { 00:20:48.616 "security": 0, 00:20:48.616 "format": 1, 00:20:48.616 "firmware": 0, 00:20:48.616 "ns_manage": 1 00:20:48.616 }, 00:20:48.616 "multi_ctrlr": false, 00:20:48.616 "ana_reporting": false 00:20:48.616 }, 00:20:48.616 "vs": { 00:20:48.616 "nvme_version": "1.4" 00:20:48.616 }, 00:20:48.617 "ns_data": { 00:20:48.617 "id": 1, 00:20:48.617 "can_share": false 00:20:48.617 } 00:20:48.617 } 00:20:48.617 ], 00:20:48.617 "mp_policy": "active_passive" 00:20:48.617 } 00:20:48.617 } 00:20:48.617 ]' 00:20:48.617 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:48.617 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:48.617 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:48.617 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:48.617 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:48.617 23:58:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:48.617 23:58:21 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:48.617 23:58:21 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:48.617 23:58:21 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:48.617 23:58:21 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:48.617 23:58:21 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:48.878 23:58:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8fd10eea-cb27-4552-a559-222bb5fe7130 00:20:48.878 23:58:21 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:48.878 23:58:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fd10eea-cb27-4552-a559-222bb5fe7130 00:20:49.138 23:58:21 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:49.396 23:58:21 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=26beb0a2-0f49-45a2-975d-948f66becf95 00:20:49.396 23:58:21 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 26beb0a2-0f49-45a2-975d-948f66becf95 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:49.655 23:58:22 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:49.655 { 00:20:49.655 "name": "be9a6e0e-4cf0-40fb-8789-f0b11159576b", 00:20:49.655 "aliases": [ 00:20:49.655 "lvs/nvme0n1p0" 00:20:49.655 ], 00:20:49.655 "product_name": "Logical Volume", 00:20:49.655 "block_size": 4096, 00:20:49.655 "num_blocks": 26476544, 00:20:49.655 "uuid": "be9a6e0e-4cf0-40fb-8789-f0b11159576b", 00:20:49.655 "assigned_rate_limits": { 00:20:49.655 "rw_ios_per_sec": 0, 00:20:49.655 "rw_mbytes_per_sec": 0, 00:20:49.655 "r_mbytes_per_sec": 0, 00:20:49.655 "w_mbytes_per_sec": 0 00:20:49.655 }, 00:20:49.655 "claimed": false, 00:20:49.655 "zoned": false, 00:20:49.655 "supported_io_types": { 00:20:49.655 "read": true, 00:20:49.655 "write": true, 00:20:49.655 "unmap": true, 00:20:49.655 "flush": false, 00:20:49.655 "reset": true, 00:20:49.655 "nvme_admin": false, 00:20:49.655 "nvme_io": false, 00:20:49.655 "nvme_io_md": false, 00:20:49.655 "write_zeroes": true, 00:20:49.655 "zcopy": false, 00:20:49.655 "get_zone_info": false, 00:20:49.655 "zone_management": false, 00:20:49.655 "zone_append": false, 00:20:49.655 "compare": false, 00:20:49.655 "compare_and_write": false, 00:20:49.655 "abort": false, 00:20:49.655 "seek_hole": true, 00:20:49.655 "seek_data": true, 00:20:49.655 "copy": false, 00:20:49.655 "nvme_iov_md": false 00:20:49.655 }, 00:20:49.655 "driver_specific": { 00:20:49.655 "lvol": { 00:20:49.655 "lvol_store_uuid": "26beb0a2-0f49-45a2-975d-948f66becf95", 00:20:49.655 "base_bdev": "nvme0n1", 00:20:49.655 "thin_provision": true, 00:20:49.655 "num_allocated_clusters": 0, 00:20:49.655 "snapshot": false, 00:20:49.655 "clone": false, 00:20:49.655 "esnap_clone": false 00:20:49.655 } 00:20:49.655 } 00:20:49.655 } 00:20:49.655 ]' 00:20:49.655 23:58:22 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:49.655 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:49.915 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:49.915 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:49.915 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:49.915 23:58:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:49.915 23:58:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:49.915 23:58:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:50.174 23:58:22 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:50.174 23:58:22 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:50.174 23:58:22 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:50.174 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:50.174 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:50.175 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:50.175 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:50.175 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:50.175 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:50.175 { 00:20:50.175 "name": "be9a6e0e-4cf0-40fb-8789-f0b11159576b", 00:20:50.175 "aliases": [ 00:20:50.175 "lvs/nvme0n1p0" 00:20:50.175 ], 00:20:50.175 "product_name": "Logical Volume", 00:20:50.175 "block_size": 4096, 00:20:50.175 "num_blocks": 26476544, 00:20:50.175 "uuid": "be9a6e0e-4cf0-40fb-8789-f0b11159576b", 00:20:50.175 "assigned_rate_limits": { 00:20:50.175 "rw_ios_per_sec": 0, 00:20:50.175 "rw_mbytes_per_sec": 0, 00:20:50.175 "r_mbytes_per_sec": 0, 00:20:50.175 "w_mbytes_per_sec": 0 00:20:50.175 }, 00:20:50.175 "claimed": false, 00:20:50.175 "zoned": false, 00:20:50.175 "supported_io_types": { 00:20:50.175 "read": true, 00:20:50.175 "write": true, 00:20:50.175 "unmap": true, 00:20:50.175 "flush": false, 00:20:50.175 "reset": true, 00:20:50.175 "nvme_admin": false, 00:20:50.175 "nvme_io": false, 00:20:50.175 "nvme_io_md": false, 00:20:50.175 "write_zeroes": true, 00:20:50.175 "zcopy": false, 00:20:50.175 "get_zone_info": false, 00:20:50.175 "zone_management": false, 00:20:50.175 "zone_append": false, 00:20:50.175 "compare": false, 00:20:50.175 "compare_and_write": false, 00:20:50.175 "abort": false, 00:20:50.175 "seek_hole": true, 00:20:50.175 "seek_data": true, 00:20:50.175 "copy": false, 00:20:50.175 "nvme_iov_md": false 00:20:50.175 }, 00:20:50.175 "driver_specific": { 00:20:50.175 "lvol": { 00:20:50.175 "lvol_store_uuid": "26beb0a2-0f49-45a2-975d-948f66becf95", 00:20:50.175 "base_bdev": "nvme0n1", 00:20:50.175 "thin_provision": true, 00:20:50.175 "num_allocated_clusters": 0, 00:20:50.175 "snapshot": false, 00:20:50.175 "clone": false, 00:20:50.175 "esnap_clone": false 00:20:50.175 } 00:20:50.175 } 00:20:50.175 } 00:20:50.175 ]' 00:20:50.175 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
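The get_bdev_size helper traced repeatedly above reduces to block_size times num_blocks converted to MiB: 4096-byte blocks times 26476544 blocks gives the bdev_size=103424 echoed in this run (and 1310720 blocks gave 5120 for nvme0n1 earlier). A minimal sketch of that query and arithmetic, using the bdev name and rpc.py path from this run (not the actual autotest_common.sh helper):

    # Sketch: size in MiB = block_size * num_blocks / 1024 / 1024
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be9a6e0e-4cf0-40fb-8789-f0b11159576b)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 26476544 in this run
    echo $(( bs * nb / 1024 / 1024 ))               # 103424 MiB, the value used as base_size above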
00:20:50.175 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:50.435 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:50.435 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:50.435 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:50.435 23:58:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:50.435 23:58:22 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:50.435 23:58:22 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:50.435 23:58:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:50.435 23:58:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:50.435 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:50.435 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:50.435 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:50.435 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:50.435 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b be9a6e0e-4cf0-40fb-8789-f0b11159576b 00:20:50.696 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:50.696 { 00:20:50.696 "name": "be9a6e0e-4cf0-40fb-8789-f0b11159576b", 00:20:50.696 "aliases": [ 00:20:50.696 "lvs/nvme0n1p0" 00:20:50.696 ], 00:20:50.696 "product_name": "Logical Volume", 00:20:50.696 "block_size": 4096, 00:20:50.696 "num_blocks": 26476544, 00:20:50.696 "uuid": "be9a6e0e-4cf0-40fb-8789-f0b11159576b", 00:20:50.696 "assigned_rate_limits": { 00:20:50.696 "rw_ios_per_sec": 0, 00:20:50.696 "rw_mbytes_per_sec": 0, 00:20:50.696 "r_mbytes_per_sec": 0, 00:20:50.696 "w_mbytes_per_sec": 0 00:20:50.696 }, 00:20:50.696 "claimed": false, 00:20:50.696 "zoned": false, 00:20:50.696 "supported_io_types": { 00:20:50.696 "read": true, 00:20:50.696 "write": true, 00:20:50.696 "unmap": true, 00:20:50.696 "flush": false, 00:20:50.696 "reset": true, 00:20:50.696 "nvme_admin": false, 00:20:50.696 "nvme_io": false, 00:20:50.696 "nvme_io_md": false, 00:20:50.696 "write_zeroes": true, 00:20:50.696 "zcopy": false, 00:20:50.696 "get_zone_info": false, 00:20:50.696 "zone_management": false, 00:20:50.696 "zone_append": false, 00:20:50.696 "compare": false, 00:20:50.696 "compare_and_write": false, 00:20:50.696 "abort": false, 00:20:50.696 "seek_hole": true, 00:20:50.696 "seek_data": true, 00:20:50.696 "copy": false, 00:20:50.696 "nvme_iov_md": false 00:20:50.696 }, 00:20:50.696 "driver_specific": { 00:20:50.696 "lvol": { 00:20:50.696 "lvol_store_uuid": "26beb0a2-0f49-45a2-975d-948f66becf95", 00:20:50.696 "base_bdev": "nvme0n1", 00:20:50.696 "thin_provision": true, 00:20:50.696 "num_allocated_clusters": 0, 00:20:50.696 "snapshot": false, 00:20:50.696 "clone": false, 00:20:50.696 "esnap_clone": false 00:20:50.696 } 00:20:50.696 } 00:20:50.696 } 00:20:50.696 ]' 00:20:50.696 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:50.696 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:50.696 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:50.956 23:58:23 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:50.956 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:50.956 23:58:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d be9a6e0e-4cf0-40fb-8789-f0b11159576b --l2p_dram_limit 10' 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:50.956 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:50.956 23:58:23 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d be9a6e0e-4cf0-40fb-8789-f0b11159576b --l2p_dram_limit 10 -c nvc0n1p0 00:20:50.956 [2024-12-05 23:58:23.612052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.612118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.956 [2024-12-05 23:58:23.612138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.956 [2024-12-05 23:58:23.612148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.612239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.612251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.956 [2024-12-05 23:58:23.612262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:50.956 [2024-12-05 23:58:23.612271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.612300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.956 [2024-12-05 23:58:23.613167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.956 [2024-12-05 23:58:23.613208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.613217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.956 [2024-12-05 23:58:23.613234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:20:50.956 [2024-12-05 23:58:23.613242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.613325] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5d1b3a65-8ffa-42fb-a989-b29d7582f516 00:20:50.956 [2024-12-05 23:58:23.615089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.615139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:50.956 [2024-12-05 23:58:23.615152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:50.956 [2024-12-05 23:58:23.615168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.624081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 
23:58:23.624133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.956 [2024-12-05 23:58:23.624144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.851 ms 00:20:50.956 [2024-12-05 23:58:23.624155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.624273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.624286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.956 [2024-12-05 23:58:23.624296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:50.956 [2024-12-05 23:58:23.624310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.624364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.624376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.956 [2024-12-05 23:58:23.624387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:50.956 [2024-12-05 23:58:23.624397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.624421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.956 [2024-12-05 23:58:23.628943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.628999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.956 [2024-12-05 23:58:23.629014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.527 ms 00:20:50.956 [2024-12-05 23:58:23.629022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.629066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.629075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.956 [2024-12-05 23:58:23.629086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:50.956 [2024-12-05 23:58:23.629094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.629147] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:50.956 [2024-12-05 23:58:23.629300] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.956 [2024-12-05 23:58:23.629317] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.956 [2024-12-05 23:58:23.629329] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:50.956 [2024-12-05 23:58:23.629342] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629359] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629371] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.956 [2024-12-05 23:58:23.629380] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.956 [2024-12-05 23:58:23.629393] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.956 [2024-12-05 23:58:23.629401] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.956 [2024-12-05 23:58:23.629412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.629434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.956 [2024-12-05 23:58:23.629444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:20:50.956 [2024-12-05 23:58:23.629454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.629543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.956 [2024-12-05 23:58:23.629558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.956 [2024-12-05 23:58:23.629569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:50.956 [2024-12-05 23:58:23.629576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.956 [2024-12-05 23:58:23.629686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.956 [2024-12-05 23:58:23.629697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.956 [2024-12-05 23:58:23.629708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.956 [2024-12-05 23:58:23.629734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.956 [2024-12-05 23:58:23.629759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.956 [2024-12-05 23:58:23.629777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.956 [2024-12-05 23:58:23.629784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.956 [2024-12-05 23:58:23.629794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.956 [2024-12-05 23:58:23.629801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.956 [2024-12-05 23:58:23.629810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:50.956 [2024-12-05 23:58:23.629817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.956 [2024-12-05 23:58:23.629835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.956 [2024-12-05 23:58:23.629859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.956 
[2024-12-05 23:58:23.629884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.956 [2024-12-05 23:58:23.629909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.956 [2024-12-05 23:58:23.629931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.956 [2024-12-05 23:58:23.629946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.956 [2024-12-05 23:58:23.629956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:50.956 [2024-12-05 23:58:23.629977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.956 [2024-12-05 23:58:23.629986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.956 [2024-12-05 23:58:23.629993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:50.956 [2024-12-05 23:58:23.630003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.956 [2024-12-05 23:58:23.630009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.956 [2024-12-05 23:58:23.630019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:50.956 [2024-12-05 23:58:23.630025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.630034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.956 [2024-12-05 23:58:23.630040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:50.956 [2024-12-05 23:58:23.630048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.630055] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.956 [2024-12-05 23:58:23.630065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.956 [2024-12-05 23:58:23.630073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.956 [2024-12-05 23:58:23.630082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.956 [2024-12-05 23:58:23.630090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.956 [2024-12-05 23:58:23.630101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.956 [2024-12-05 23:58:23.630107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.956 [2024-12-05 23:58:23.630117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.956 [2024-12-05 23:58:23.630124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.956 [2024-12-05 23:58:23.630133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.956 [2024-12-05 23:58:23.630144] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.956 [2024-12-05 
23:58:23.630166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.956 [2024-12-05 23:58:23.630176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.956 [2024-12-05 23:58:23.630186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:50.956 [2024-12-05 23:58:23.630193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:50.956 [2024-12-05 23:58:23.630203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:50.956 [2024-12-05 23:58:23.630212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:50.957 [2024-12-05 23:58:23.630223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:50.957 [2024-12-05 23:58:23.630230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:50.957 [2024-12-05 23:58:23.630241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:50.957 [2024-12-05 23:58:23.630249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:50.957 [2024-12-05 23:58:23.630260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:50.957 [2024-12-05 23:58:23.630268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:50.957 [2024-12-05 23:58:23.630277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:50.957 [2024-12-05 23:58:23.630284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:50.957 [2024-12-05 23:58:23.630293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:50.957 [2024-12-05 23:58:23.630300] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.957 [2024-12-05 23:58:23.630310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.957 [2024-12-05 23:58:23.630319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.957 [2024-12-05 23:58:23.630328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.957 [2024-12-05 23:58:23.630336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.957 [2024-12-05 23:58:23.630345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.957 [2024-12-05 23:58:23.630353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.957 [2024-12-05 23:58:23.630363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.957 [2024-12-05 23:58:23.630370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:20:50.957 [2024-12-05 23:58:23.630380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.957 [2024-12-05 23:58:23.630420] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:50.957 [2024-12-05 23:58:23.630442] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:55.159 [2024-12-05 23:58:27.673570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.673657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:55.159 [2024-12-05 23:58:27.673675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4043.134 ms 00:20:55.159 [2024-12-05 23:58:27.673687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.704675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.704739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.159 [2024-12-05 23:58:27.704755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.746 ms 00:20:55.159 [2024-12-05 23:58:27.704765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.704908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.704923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.159 [2024-12-05 23:58:27.704932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:55.159 [2024-12-05 23:58:27.704948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.739988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.740043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.159 [2024-12-05 23:58:27.740056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.972 ms 00:20:55.159 [2024-12-05 23:58:27.740067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.740105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.740119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.159 [2024-12-05 23:58:27.740128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.159 [2024-12-05 23:58:27.740146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.740764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.740793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.159 [2024-12-05 23:58:27.740804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:20:55.159 [2024-12-05 23:58:27.740814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 
[2024-12-05 23:58:27.740934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.740947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.159 [2024-12-05 23:58:27.740959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:55.159 [2024-12-05 23:58:27.741007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.758843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.759085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.159 [2024-12-05 23:58:27.759107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.816 ms 00:20:55.159 [2024-12-05 23:58:27.759118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.159 [2024-12-05 23:58:27.792352] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.159 [2024-12-05 23:58:27.796222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.159 [2024-12-05 23:58:27.796295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.159 [2024-12-05 23:58:27.796312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.007 ms 00:20:55.159 [2024-12-05 23:58:27.796321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:27.910533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:27.910762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:55.421 [2024-12-05 23:58:27.910794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.157 ms 00:20:55.421 [2024-12-05 23:58:27.910804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:27.911105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:27.911123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.421 [2024-12-05 23:58:27.911138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:20:55.421 [2024-12-05 23:58:27.911147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:27.937450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:27.937647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:55.421 [2024-12-05 23:58:27.937676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.239 ms 00:20:55.421 [2024-12-05 23:58:27.937685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:27.963580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:27.963631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:55.421 [2024-12-05 23:58:27.963649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.786 ms 00:20:55.421 [2024-12-05 23:58:27.963657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:27.964357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:27.964419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.421 
[2024-12-05 23:58:27.964438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:20:55.421 [2024-12-05 23:58:27.964451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:28.058055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:28.058107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:55.421 [2024-12-05 23:58:28.058126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.537 ms 00:20:55.421 [2024-12-05 23:58:28.058135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:28.086708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:28.086760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:55.421 [2024-12-05 23:58:28.086778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.466 ms 00:20:55.421 [2024-12-05 23:58:28.086787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.421 [2024-12-05 23:58:28.113975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.421 [2024-12-05 23:58:28.114168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:55.421 [2024-12-05 23:58:28.114195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.120 ms 00:20:55.421 [2024-12-05 23:58:28.114203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.682 [2024-12-05 23:58:28.141283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.682 [2024-12-05 23:58:28.141472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.682 [2024-12-05 23:58:28.141500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.991 ms 00:20:55.682 [2024-12-05 23:58:28.141509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.682 [2024-12-05 23:58:28.141626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.682 [2024-12-05 23:58:28.141637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.682 [2024-12-05 23:58:28.141653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:55.682 [2024-12-05 23:58:28.141661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.682 [2024-12-05 23:58:28.141785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.682 [2024-12-05 23:58:28.141799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.682 [2024-12-05 23:58:28.141811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:55.682 [2024-12-05 23:58:28.141818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.682 [2024-12-05 23:58:28.143001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4530.439 ms, result 0 00:20:55.682 { 00:20:55.682 "name": "ftl0", 00:20:55.682 "uuid": "5d1b3a65-8ffa-42fb-a989-b29d7582f516" 00:20:55.682 } 00:20:55.682 23:58:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:55.682 23:58:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:55.944 23:58:28 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:55.944 23:58:28 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:55.944 [2024-12-05 23:58:28.586305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.586387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:55.944 [2024-12-05 23:58:28.586403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.944 [2024-12-05 23:58:28.586415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.586441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.944 [2024-12-05 23:58:28.589554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.589597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:55.944 [2024-12-05 23:58:28.589613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:20:55.944 [2024-12-05 23:58:28.589622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.589904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.589919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:55.944 [2024-12-05 23:58:28.589930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:20:55.944 [2024-12-05 23:58:28.589938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.593211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.593237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:55.944 [2024-12-05 23:58:28.593249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:20:55.944 [2024-12-05 23:58:28.593257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.599543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.599745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:55.944 [2024-12-05 23:58:28.599777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.262 ms 00:20:55.944 [2024-12-05 23:58:28.599785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.627590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.627641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:55.944 [2024-12-05 23:58:28.627658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.714 ms 00:20:55.944 [2024-12-05 23:58:28.627666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.646256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.646460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:55.944 [2024-12-05 23:58:28.646489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.526 ms 00:20:55.944 [2024-12-05 23:58:28.646499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.944 [2024-12-05 23:58:28.646759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.944 [2024-12-05 23:58:28.646773] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:55.944 [2024-12-05 23:58:28.646785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:20:55.944 [2024-12-05 23:58:28.646794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.205 [2024-12-05 23:58:28.673416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.205 [2024-12-05 23:58:28.673603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:56.205 [2024-12-05 23:58:28.673630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.593 ms 00:20:56.205 [2024-12-05 23:58:28.673638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.205 [2024-12-05 23:58:28.699695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.205 [2024-12-05 23:58:28.699743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:56.205 [2024-12-05 23:58:28.699758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.952 ms 00:20:56.205 [2024-12-05 23:58:28.699765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.205 [2024-12-05 23:58:28.724949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.205 [2024-12-05 23:58:28.725007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:56.205 [2024-12-05 23:58:28.725023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.123 ms 00:20:56.205 [2024-12-05 23:58:28.725030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.205 [2024-12-05 23:58:28.749897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.205 [2024-12-05 23:58:28.749946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:56.205 [2024-12-05 23:58:28.749961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.760 ms 00:20:56.205 [2024-12-05 23:58:28.749987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.205 [2024-12-05 23:58:28.750041] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:56.205 [2024-12-05 23:58:28.750057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750149] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 
[2024-12-05 23:58:28.750366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:56.205 [2024-12-05 23:58:28.750382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:56.206 [2024-12-05 23:58:28.750589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:56.206 [2024-12-05 23:58:28.750958] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:56.206 [2024-12-05 23:58:28.750980] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d1b3a65-8ffa-42fb-a989-b29d7582f516 00:20:56.206 [2024-12-05 23:58:28.750989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:56.206 [2024-12-05 23:58:28.751001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:56.206 [2024-12-05 23:58:28.751012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:56.206 [2024-12-05 23:58:28.751022] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:56.206 [2024-12-05 23:58:28.751030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:56.206 [2024-12-05 23:58:28.751040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:56.206 [2024-12-05 23:58:28.751048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:56.206 [2024-12-05 23:58:28.751057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:56.206 [2024-12-05 23:58:28.751063] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:56.206 [2024-12-05 23:58:28.751073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.206 [2024-12-05 23:58:28.751080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:56.206 [2024-12-05 23:58:28.751091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:20:56.206 [2024-12-05 23:58:28.751101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.206 [2024-12-05 23:58:28.764897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.206 [2024-12-05 23:58:28.764938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:56.206 [2024-12-05 23:58:28.764952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.728 ms 00:20:56.206 [2024-12-05 23:58:28.764960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.206 [2024-12-05 23:58:28.765404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.206 [2024-12-05 23:58:28.765425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:56.206 [2024-12-05 23:58:28.765441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:20:56.206 [2024-12-05 23:58:28.765449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.206 [2024-12-05 23:58:28.812456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.206 [2024-12-05 23:58:28.812523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.206 [2024-12-05 23:58:28.812538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.206 [2024-12-05 23:58:28.812548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.206 [2024-12-05 23:58:28.812625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.206 [2024-12-05 23:58:28.812635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.206 [2024-12-05 23:58:28.812650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.206 [2024-12-05 23:58:28.812657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.206 [2024-12-05 23:58:28.812746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.206 [2024-12-05 23:58:28.812758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.206 [2024-12-05 23:58:28.812768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.207 [2024-12-05 23:58:28.812776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.207 [2024-12-05 23:58:28.812800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.207 [2024-12-05 23:58:28.812808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.207 [2024-12-05 23:58:28.812819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.207 [2024-12-05 23:58:28.812829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.207 [2024-12-05 23:58:28.897567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.207 [2024-12-05 23:58:28.897626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.207 [2024-12-05 23:58:28.897642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:56.207 [2024-12-05 23:58:28.897652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.964795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.964843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.467 [2024-12-05 23:58:28.964856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.964867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.964956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.964981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.467 [2024-12-05 23:58:28.964992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.965000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.965049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.965058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.467 [2024-12-05 23:58:28.965068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.965075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.965175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.965185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.467 [2024-12-05 23:58:28.965195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.965202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.965235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.965244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:56.467 [2024-12-05 23:58:28.965254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.965261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.965301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.965310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.467 [2024-12-05 23:58:28.965318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.965326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.965371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.467 [2024-12-05 23:58:28.965381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.467 [2024-12-05 23:58:28.965390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.467 [2024-12-05 23:58:28.965398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.467 [2024-12-05 23:58:28.965528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.199 ms, result 0 00:20:56.467 true 00:20:56.467 23:58:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77385 
00:20:56.467 23:58:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77385 ']' 00:20:56.467 23:58:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77385 00:20:56.467 23:58:28 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:56.467 23:58:28 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.467 23:58:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77385 00:20:56.467 killing process with pid 77385 00:20:56.467 23:58:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.467 23:58:29 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.467 23:58:29 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77385' 00:20:56.467 23:58:29 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77385 00:20:56.467 23:58:29 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77385 00:21:03.054 23:58:35 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:07.298 262144+0 records in 00:21:07.298 262144+0 records out 00:21:07.298 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.3585 s, 246 MB/s 00:21:07.298 23:58:39 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:09.222 23:58:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:09.481 [2024-12-05 23:58:41.971770] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:21:09.481 [2024-12-05 23:58:41.972125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77635 ] 00:21:09.481 [2024-12-05 23:58:42.134597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.738 [2024-12-05 23:58:42.228036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.997 [2024-12-05 23:58:42.483792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.997 [2024-12-05 23:58:42.483855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.997 [2024-12-05 23:58:42.642320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.997 [2024-12-05 23:58:42.642370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:09.998 [2024-12-05 23:58:42.642388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:09.998 [2024-12-05 23:58:42.642396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.642442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.642455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.998 [2024-12-05 23:58:42.642463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:09.998 [2024-12-05 23:58:42.642471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.642489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:09.998 [2024-12-05 23:58:42.643221] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:09.998 [2024-12-05 23:58:42.643237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.643245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.998 [2024-12-05 23:58:42.643253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:21:09.998 [2024-12-05 23:58:42.643259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.644294] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:09.998 [2024-12-05 23:58:42.657188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.657325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:09.998 [2024-12-05 23:58:42.657341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:21:09.998 [2024-12-05 23:58:42.657349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.657402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.657411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:09.998 [2024-12-05 23:58:42.657419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:09.998 [2024-12-05 23:58:42.657427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.662323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.662354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.998 [2024-12-05 23:58:42.662365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.835 ms 00:21:09.998 [2024-12-05 23:58:42.662376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.662445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.662453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.998 [2024-12-05 23:58:42.662461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:09.998 [2024-12-05 23:58:42.662468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.662499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.662508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:09.998 [2024-12-05 23:58:42.662516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:09.998 [2024-12-05 23:58:42.662523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.662542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.998 [2024-12-05 23:58:42.665925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.665952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.998 [2024-12-05 23:58:42.665972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:21:09.998 [2024-12-05 23:58:42.665981] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.666017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.666026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:09.998 [2024-12-05 23:58:42.666035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:09.998 [2024-12-05 23:58:42.666043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.666062] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:09.998 [2024-12-05 23:58:42.666082] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:09.998 [2024-12-05 23:58:42.666117] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:09.998 [2024-12-05 23:58:42.666136] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:09.998 [2024-12-05 23:58:42.666241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:09.998 [2024-12-05 23:58:42.666251] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:09.998 [2024-12-05 23:58:42.666263] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:09.998 [2024-12-05 23:58:42.666274] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666284] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666293] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:09.998 [2024-12-05 23:58:42.666301] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:09.998 [2024-12-05 23:58:42.666311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:09.998 [2024-12-05 23:58:42.666319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:09.998 [2024-12-05 23:58:42.666327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.666336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:09.998 [2024-12-05 23:58:42.666344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:09.998 [2024-12-05 23:58:42.666352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.666436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.998 [2024-12-05 23:58:42.666445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:09.998 [2024-12-05 23:58:42.666453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:09.998 [2024-12-05 23:58:42.666460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.998 [2024-12-05 23:58:42.666577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:09.998 [2024-12-05 23:58:42.666588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:09.998 [2024-12-05 23:58:42.666597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:09.998 [2024-12-05 23:58:42.666606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:09.998 [2024-12-05 23:58:42.666622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:09.998 [2024-12-05 23:58:42.666646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.998 [2024-12-05 23:58:42.666661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:09.998 [2024-12-05 23:58:42.666669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:09.998 [2024-12-05 23:58:42.666677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.998 [2024-12-05 23:58:42.666690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:09.998 [2024-12-05 23:58:42.666698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:09.998 [2024-12-05 23:58:42.666705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:09.998 [2024-12-05 23:58:42.666718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:09.998 [2024-12-05 23:58:42.666738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:09.998 [2024-12-05 23:58:42.666757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:09.998 [2024-12-05 23:58:42.666776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:09.998 [2024-12-05 23:58:42.666795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:09.998 [2024-12-05 23:58:42.666801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.998 [2024-12-05 23:58:42.666807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:09.998 [2024-12-05 23:58:42.666814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:09.999 [2024-12-05 23:58:42.666820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.999 [2024-12-05 23:58:42.666826] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:09.999 [2024-12-05 23:58:42.666833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:09.999 [2024-12-05 23:58:42.666839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.999 [2024-12-05 23:58:42.666845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:09.999 [2024-12-05 23:58:42.666851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:09.999 [2024-12-05 23:58:42.666857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.999 [2024-12-05 23:58:42.666863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:09.999 [2024-12-05 23:58:42.666870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:09.999 [2024-12-05 23:58:42.666877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.999 [2024-12-05 23:58:42.666883] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:09.999 [2024-12-05 23:58:42.666891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:09.999 [2024-12-05 23:58:42.666898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.999 [2024-12-05 23:58:42.666905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.999 [2024-12-05 23:58:42.666912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:09.999 [2024-12-05 23:58:42.666920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:09.999 [2024-12-05 23:58:42.666927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:09.999 [2024-12-05 23:58:42.666933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:09.999 [2024-12-05 23:58:42.666939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:09.999 [2024-12-05 23:58:42.666945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:09.999 [2024-12-05 23:58:42.666953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:09.999 [2024-12-05 23:58:42.666962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.666983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:09.999 [2024-12-05 23:58:42.666990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:09.999 [2024-12-05 23:58:42.666998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:09.999 [2024-12-05 23:58:42.667005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:09.999 [2024-12-05 23:58:42.667012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:09.999 [2024-12-05 23:58:42.667019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:09.999 [2024-12-05 23:58:42.667026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:09.999 [2024-12-05 23:58:42.667033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:09.999 [2024-12-05 23:58:42.667040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:09.999 [2024-12-05 23:58:42.667047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.667054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.667061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.667071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.667078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:09.999 [2024-12-05 23:58:42.667085] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:09.999 [2024-12-05 23:58:42.667092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.667100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:09.999 [2024-12-05 23:58:42.667107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:09.999 [2024-12-05 23:58:42.667115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:09.999 [2024-12-05 23:58:42.667122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:09.999 [2024-12-05 23:58:42.667130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.999 [2024-12-05 23:58:42.667137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:09.999 [2024-12-05 23:58:42.667145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:21:09.999 [2024-12-05 23:58:42.667152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.999 [2024-12-05 23:58:42.693253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.999 [2024-12-05 23:58:42.693288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.999 [2024-12-05 23:58:42.693298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.058 ms 00:21:09.999 [2024-12-05 23:58:42.693313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.999 [2024-12-05 23:58:42.693393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.999 [2024-12-05 23:58:42.693402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:09.999 [2024-12-05 23:58:42.693410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.061 ms 00:21:09.999 [2024-12-05 23:58:42.693417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.735208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.735343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.258 [2024-12-05 23:58:42.735361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.742 ms 00:21:10.258 [2024-12-05 23:58:42.735369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.735407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.735417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.258 [2024-12-05 23:58:42.735429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:10.258 [2024-12-05 23:58:42.735437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.735796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.735811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.258 [2024-12-05 23:58:42.735820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:21:10.258 [2024-12-05 23:58:42.735828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.735946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.735955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.258 [2024-12-05 23:58:42.735998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:10.258 [2024-12-05 23:58:42.736006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.748987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.749019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.258 [2024-12-05 23:58:42.749029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.960 ms 00:21:10.258 [2024-12-05 23:58:42.749036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.761712] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:10.258 [2024-12-05 23:58:42.761748] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:10.258 [2024-12-05 23:58:42.761761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.761770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:10.258 [2024-12-05 23:58:42.761780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.637 ms 00:21:10.258 [2024-12-05 23:58:42.761788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.785958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.785999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:10.258 [2024-12-05 23:58:42.786011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.133 ms 00:21:10.258 [2024-12-05 23:58:42.786018] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.797879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.798008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:10.258 [2024-12-05 23:58:42.798024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.824 ms 00:21:10.258 [2024-12-05 23:58:42.798031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.809743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.809853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:10.258 [2024-12-05 23:58:42.809867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.682 ms 00:21:10.258 [2024-12-05 23:58:42.809874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.810466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.810485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:10.258 [2024-12-05 23:58:42.810494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:21:10.258 [2024-12-05 23:58:42.810504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.866327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.866367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:10.258 [2024-12-05 23:58:42.866379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.807 ms 00:21:10.258 [2024-12-05 23:58:42.866391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.877353] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:10.258 [2024-12-05 23:58:42.879598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.879623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:10.258 [2024-12-05 23:58:42.879635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.167 ms 00:21:10.258 [2024-12-05 23:58:42.879643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.879730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.879742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:10.258 [2024-12-05 23:58:42.879752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:10.258 [2024-12-05 23:58:42.879760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.879833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.879844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:10.258 [2024-12-05 23:58:42.879854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:10.258 [2024-12-05 23:58:42.879862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.879882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.879890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:10.258 [2024-12-05 23:58:42.879899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:10.258 [2024-12-05 23:58:42.879908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.879939] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:10.258 [2024-12-05 23:58:42.879951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.879960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:10.258 [2024-12-05 23:58:42.879987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:10.258 [2024-12-05 23:58:42.879996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.903942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.904087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:10.258 [2024-12-05 23:58:42.904105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.927 ms 00:21:10.258 [2024-12-05 23:58:42.904117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.904179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.258 [2024-12-05 23:58:42.904187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:10.258 [2024-12-05 23:58:42.904196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:10.258 [2024-12-05 23:58:42.904203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.258 [2024-12-05 23:58:42.905181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.433 ms, result 0 00:21:11.636  [2024-12-05T23:58:45.289Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-05T23:58:46.224Z] Copying: 20848/1048576 [kB] (9612 kBps) [2024-12-05T23:58:47.163Z] Copying: 30536/1048576 [kB] (9688 kBps) [2024-12-05T23:58:48.105Z] Copying: 40/1024 [MB] (10 MBps) [2024-12-05T23:58:49.048Z] Copying: 50712/1048576 [kB] (9616 kBps) [2024-12-05T23:58:49.985Z] Copying: 60072/1048576 [kB] (9360 kBps) [2024-12-05T23:58:50.923Z] Copying: 69760/1048576 [kB] (9688 kBps) [2024-12-05T23:58:52.308Z] Copying: 79896/1048576 [kB] (10136 kBps) [2024-12-05T23:58:53.250Z] Copying: 89468/1048576 [kB] (9572 kBps) [2024-12-05T23:58:54.238Z] Copying: 98/1024 [MB] (11 MBps) [2024-12-05T23:58:55.176Z] Copying: 109/1024 [MB] (10 MBps) [2024-12-05T23:58:56.116Z] Copying: 121748/1048576 [kB] (10124 kBps) [2024-12-05T23:58:57.055Z] Copying: 128/1024 [MB] (10 MBps) [2024-12-05T23:58:57.994Z] Copying: 142124/1048576 [kB] (10068 kBps) [2024-12-05T23:58:58.928Z] Copying: 152208/1048576 [kB] (10084 kBps) [2024-12-05T23:59:00.306Z] Copying: 158/1024 [MB] (10 MBps) [2024-12-05T23:59:01.247Z] Copying: 168/1024 [MB] (10 MBps) [2024-12-05T23:59:02.188Z] Copying: 178/1024 [MB] (10 MBps) [2024-12-05T23:59:03.120Z] Copying: 192736/1048576 [kB] (9532 kBps) [2024-12-05T23:59:04.060Z] Copying: 202892/1048576 [kB] (10156 kBps) [2024-12-05T23:59:04.997Z] Copying: 212976/1048576 [kB] (10084 kBps) [2024-12-05T23:59:05.935Z] Copying: 222828/1048576 [kB] (9852 kBps) [2024-12-05T23:59:07.312Z] Copying: 232232/1048576 [kB] (9404 kBps) [2024-12-05T23:59:08.253Z] Copying: 241920/1048576 [kB] (9688 kBps) [2024-12-05T23:59:09.187Z] Copying: 251788/1048576 
[kB] (9868 kBps) [2024-12-05T23:59:10.134Z] Copying: 256/1024 [MB] (10 MBps) [2024-12-05T23:59:11.070Z] Copying: 266/1024 [MB] (10 MBps) [2024-12-05T23:59:12.008Z] Copying: 277/1024 [MB] (10 MBps) [2024-12-05T23:59:12.945Z] Copying: 293952/1048576 [kB] (10216 kBps) [2024-12-05T23:59:14.321Z] Copying: 297/1024 [MB] (10 MBps) [2024-12-05T23:59:15.263Z] Copying: 314800/1048576 [kB] (10056 kBps) [2024-12-05T23:59:16.205Z] Copying: 317/1024 [MB] (10 MBps) [2024-12-05T23:59:17.145Z] Copying: 335272/1048576 [kB] (9696 kBps) [2024-12-05T23:59:18.096Z] Copying: 337/1024 [MB] (10 MBps) [2024-12-05T23:59:19.037Z] Copying: 355588/1048576 [kB] (9924 kBps) [2024-12-05T23:59:19.981Z] Copying: 357/1024 [MB] (10 MBps) [2024-12-05T23:59:21.360Z] Copying: 375756/1048576 [kB] (9724 kBps) [2024-12-05T23:59:21.934Z] Copying: 376/1024 [MB] (10 MBps) [2024-12-05T23:59:23.318Z] Copying: 396176/1048576 [kB] (10156 kBps) [2024-12-05T23:59:24.258Z] Copying: 397/1024 [MB] (10 MBps) [2024-12-05T23:59:25.203Z] Copying: 416800/1048576 [kB] (9772 kBps) [2024-12-05T23:59:26.152Z] Copying: 417/1024 [MB] (10 MBps) [2024-12-05T23:59:27.086Z] Copying: 429/1024 [MB] (11 MBps) [2024-12-05T23:59:28.020Z] Copying: 450196/1048576 [kB] (10064 kBps) [2024-12-05T23:59:28.965Z] Copying: 450/1024 [MB] (11 MBps) [2024-12-05T23:59:30.344Z] Copying: 461/1024 [MB] (10 MBps) [2024-12-05T23:59:31.282Z] Copying: 471/1024 [MB] (10 MBps) [2024-12-05T23:59:32.219Z] Copying: 483/1024 [MB] (11 MBps) [2024-12-05T23:59:33.165Z] Copying: 505208/1048576 [kB] (9968 kBps) [2024-12-05T23:59:34.100Z] Copying: 503/1024 [MB] (10 MBps) [2024-12-05T23:59:35.032Z] Copying: 525776/1048576 [kB] (9988 kBps) [2024-12-05T23:59:35.980Z] Copying: 523/1024 [MB] (10 MBps) [2024-12-05T23:59:37.359Z] Copying: 535/1024 [MB] (11 MBps) [2024-12-05T23:59:37.925Z] Copying: 546/1024 [MB] (11 MBps) [2024-12-05T23:59:39.311Z] Copying: 557/1024 [MB] (10 MBps) [2024-12-05T23:59:40.246Z] Copying: 580876/1048576 [kB] (9940 kBps) [2024-12-05T23:59:41.196Z] Copying: 577/1024 [MB] (10 MBps) [2024-12-05T23:59:42.137Z] Copying: 589/1024 [MB] (11 MBps) [2024-12-05T23:59:43.078Z] Copying: 600/1024 [MB] (10 MBps) [2024-12-05T23:59:44.021Z] Copying: 610/1024 [MB] (10 MBps) [2024-12-05T23:59:44.963Z] Copying: 621/1024 [MB] (10 MBps) [2024-12-05T23:59:46.345Z] Copying: 646208/1048576 [kB] (9840 kBps) [2024-12-05T23:59:47.283Z] Copying: 641/1024 [MB] (10 MBps) [2024-12-05T23:59:48.222Z] Copying: 651/1024 [MB] (10 MBps) [2024-12-05T23:59:49.158Z] Copying: 661/1024 [MB] (10 MBps) [2024-12-05T23:59:50.098Z] Copying: 674/1024 [MB] (12 MBps) [2024-12-05T23:59:51.038Z] Copying: 684/1024 [MB] (10 MBps) [2024-12-05T23:59:51.977Z] Copying: 695/1024 [MB] (11 MBps) [2024-12-05T23:59:53.358Z] Copying: 707/1024 [MB] (11 MBps) [2024-12-05T23:59:53.930Z] Copying: 718/1024 [MB] (11 MBps) [2024-12-05T23:59:55.304Z] Copying: 732/1024 [MB] (14 MBps) [2024-12-05T23:59:56.236Z] Copying: 744/1024 [MB] (12 MBps) [2024-12-05T23:59:57.168Z] Copying: 757/1024 [MB] (13 MBps) [2024-12-05T23:59:58.159Z] Copying: 770/1024 [MB] (13 MBps) [2024-12-05T23:59:59.094Z] Copying: 784/1024 [MB] (13 MBps) [2024-12-06T00:00:00.024Z] Copying: 796/1024 [MB] (12 MBps) [2024-12-06T00:00:00.954Z] Copying: 808/1024 [MB] (12 MBps) [2024-12-06T00:00:01.924Z] Copying: 820/1024 [MB] (11 MBps) [2024-12-06T00:00:03.294Z] Copying: 831/1024 [MB] (10 MBps) [2024-12-06T00:00:04.235Z] Copying: 842/1024 [MB] (11 MBps) [2024-12-06T00:00:05.176Z] Copying: 852/1024 [MB] (10 MBps) [2024-12-06T00:00:06.115Z] Copying: 882520/1048576 [kB] (9272 kBps) 
[2024-12-06T00:00:07.046Z] Copying: 892272/1048576 [kB] (9752 kBps) [2024-12-06T00:00:07.979Z] Copying: 884/1024 [MB] (12 MBps) [2024-12-06T00:00:09.357Z] Copying: 895/1024 [MB] (11 MBps) [2024-12-06T00:00:09.930Z] Copying: 911/1024 [MB] (16 MBps) [2024-12-06T00:00:11.318Z] Copying: 922/1024 [MB] (10 MBps) [2024-12-06T00:00:12.258Z] Copying: 954328/1048576 [kB] (9524 kBps) [2024-12-06T00:00:13.198Z] Copying: 963936/1048576 [kB] (9608 kBps) [2024-12-06T00:00:14.255Z] Copying: 951/1024 [MB] (10 MBps) [2024-12-06T00:00:15.196Z] Copying: 984/1024 [MB] (32 MBps) [2024-12-06T00:00:15.196Z] Copying: 1017/1024 [MB] (33 MBps) [2024-12-06T00:00:15.196Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-06 00:00:15.083502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.083620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.487 [2024-12-06 00:00:15.083682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:42.487 [2024-12-06 00:00:15.083706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.083740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.487 [2024-12-06 00:00:15.086379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.086487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.487 [2024-12-06 00:00:15.086553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.601 ms 00:22:42.487 [2024-12-06 00:00:15.086576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.088102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.088206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.487 [2024-12-06 00:00:15.088343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:22:42.487 [2024-12-06 00:00:15.088367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.102030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.102141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.487 [2024-12-06 00:00:15.102198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.596 ms 00:22:42.487 [2024-12-06 00:00:15.102221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.108461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.108577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.487 [2024-12-06 00:00:15.108591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:22:42.487 [2024-12-06 00:00:15.108599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.132363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.132503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.487 [2024-12-06 00:00:15.132519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.715 ms 00:22:42.487 [2024-12-06 00:00:15.132527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 
00:00:15.146298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.146330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.487 [2024-12-06 00:00:15.146342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.742 ms 00:22:42.487 [2024-12-06 00:00:15.146351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.146479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.146492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.487 [2024-12-06 00:00:15.146501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:42.487 [2024-12-06 00:00:15.146509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.487 [2024-12-06 00:00:15.169697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.487 [2024-12-06 00:00:15.169727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:42.487 [2024-12-06 00:00:15.169738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.174 ms 00:22:42.488 [2024-12-06 00:00:15.169745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.488 [2024-12-06 00:00:15.192236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.488 [2024-12-06 00:00:15.192266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:42.488 [2024-12-06 00:00:15.192277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.460 ms 00:22:42.488 [2024-12-06 00:00:15.192284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.746 [2024-12-06 00:00:15.214608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.746 [2024-12-06 00:00:15.214641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.746 [2024-12-06 00:00:15.214651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.293 ms 00:22:42.746 [2024-12-06 00:00:15.214659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.746 [2024-12-06 00:00:15.237212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.746 [2024-12-06 00:00:15.237340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.746 [2024-12-06 00:00:15.237355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.499 ms 00:22:42.746 [2024-12-06 00:00:15.237362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.746 [2024-12-06 00:00:15.237392] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.746 [2024-12-06 00:00:15.237408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 
00:00:15.237453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.746 [2024-12-06 00:00:15.237578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:22:42.747 [2024-12-06 00:00:15.237638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.237998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.747 [2024-12-06 00:00:15.238177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.747 [2024-12-06 00:00:15.238189] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d1b3a65-8ffa-42fb-a989-b29d7582f516 00:22:42.747 [2024-12-06 00:00:15.238197] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:42.747 [2024-12-06 00:00:15.238204] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:42.747 [2024-12-06 00:00:15.238211] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:42.747 [2024-12-06 00:00:15.238219] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:42.747 [2024-12-06 00:00:15.238225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.747 [2024-12-06 00:00:15.238238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:42.747 [2024-12-06 00:00:15.238245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.747 [2024-12-06 00:00:15.238252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.747 [2024-12-06 00:00:15.238258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:42.747 [2024-12-06 00:00:15.238265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.747 [2024-12-06 00:00:15.238273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.747 [2024-12-06 00:00:15.238281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:22:42.747 [2024-12-06 00:00:15.238287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.250721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.747 [2024-12-06 00:00:15.250751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:42.747 [2024-12-06 00:00:15.250761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.415 ms 00:22:42.747 [2024-12-06 00:00:15.250769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.251138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.747 [2024-12-06 00:00:15.251148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:42.747 [2024-12-06 00:00:15.251157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:22:42.747 [2024-12-06 00:00:15.251169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.283733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.283780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.747 [2024-12-06 00:00:15.283792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.283800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.283855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.283863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.747 [2024-12-06 00:00:15.283871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.283882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.283956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.283985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.747 [2024-12-06 00:00:15.283993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.284001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.284015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.284023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:22:42.747 [2024-12-06 00:00:15.284031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.284038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.360546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.360584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.747 [2024-12-06 00:00:15.360594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.360603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.423630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.423812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.747 [2024-12-06 00:00:15.423827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.423840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.423888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.423897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:42.747 [2024-12-06 00:00:15.423905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.423912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.423959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.423990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:42.747 [2024-12-06 00:00:15.423998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.424006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.424095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.424105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:42.747 [2024-12-06 00:00:15.424112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.424120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.424149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.424157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:42.747 [2024-12-06 00:00:15.424164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.747 [2024-12-06 00:00:15.424171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.747 [2024-12-06 00:00:15.424210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.747 [2024-12-06 00:00:15.424222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:42.748 [2024-12-06 00:00:15.424230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.748 [2024-12-06 00:00:15.424237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.748 [2024-12-06 00:00:15.424275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.748 [2024-12-06 00:00:15.424284] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:42.748 [2024-12-06 00:00:15.424292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.748 [2024-12-06 00:00:15.424299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.748 [2024-12-06 00:00:15.424409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.878 ms, result 0 00:22:44.127 00:22:44.127 00:22:44.127 00:00:16 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:44.127 [2024-12-06 00:00:16.608064] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:22:44.127 [2024-12-06 00:00:16.608185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78601 ] 00:22:44.127 [2024-12-06 00:00:16.767588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.389 [2024-12-06 00:00:16.867600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.652 [2024-12-06 00:00:17.125663] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.652 [2024-12-06 00:00:17.125731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.652 [2024-12-06 00:00:17.278824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.278885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:44.652 [2024-12-06 00:00:17.278899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:44.652 [2024-12-06 00:00:17.278908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.278958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.278991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.652 [2024-12-06 00:00:17.279000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:44.652 [2024-12-06 00:00:17.279007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.279027] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:44.652 [2024-12-06 00:00:17.279737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:44.652 [2024-12-06 00:00:17.279758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.279766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.652 [2024-12-06 00:00:17.279775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:22:44.652 [2024-12-06 00:00:17.279782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.280876] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:44.652 [2024-12-06 00:00:17.293007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 
00:00:17.293039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:44.652 [2024-12-06 00:00:17.293051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.132 ms 00:22:44.652 [2024-12-06 00:00:17.293059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.293115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.293124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:44.652 [2024-12-06 00:00:17.293132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:44.652 [2024-12-06 00:00:17.293140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.298117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.298154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.652 [2024-12-06 00:00:17.298170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:22:44.652 [2024-12-06 00:00:17.298185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.298278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.298291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.652 [2024-12-06 00:00:17.298302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:44.652 [2024-12-06 00:00:17.298313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.298352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.298361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:44.652 [2024-12-06 00:00:17.298370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:44.652 [2024-12-06 00:00:17.298377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.298401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:44.652 [2024-12-06 00:00:17.301765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.301793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.652 [2024-12-06 00:00:17.301805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.369 ms 00:22:44.652 [2024-12-06 00:00:17.301813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.301844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.301852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:44.652 [2024-12-06 00:00:17.301861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:44.652 [2024-12-06 00:00:17.301868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.301887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:44.652 [2024-12-06 00:00:17.301906] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:44.652 [2024-12-06 00:00:17.301940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:44.652 [2024-12-06 00:00:17.301957] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:44.652 [2024-12-06 00:00:17.302068] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:44.652 [2024-12-06 00:00:17.302079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:44.652 [2024-12-06 00:00:17.302090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:44.652 [2024-12-06 00:00:17.302100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:44.652 [2024-12-06 00:00:17.302109] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:44.652 [2024-12-06 00:00:17.302117] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:44.652 [2024-12-06 00:00:17.302124] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:44.652 [2024-12-06 00:00:17.302134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:44.652 [2024-12-06 00:00:17.302141] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:44.652 [2024-12-06 00:00:17.302149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.302156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:44.652 [2024-12-06 00:00:17.302163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:22:44.652 [2024-12-06 00:00:17.302170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.302253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.652 [2024-12-06 00:00:17.302260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:44.652 [2024-12-06 00:00:17.302268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:44.652 [2024-12-06 00:00:17.302275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.652 [2024-12-06 00:00:17.302391] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:44.652 [2024-12-06 00:00:17.302401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:44.652 [2024-12-06 00:00:17.302409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.652 [2024-12-06 00:00:17.302417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.652 [2024-12-06 00:00:17.302424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:44.652 [2024-12-06 00:00:17.302431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:44.652 [2024-12-06 00:00:17.302437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:44.652 [2024-12-06 00:00:17.302444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:44.652 [2024-12-06 00:00:17.302451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:44.652 [2024-12-06 00:00:17.302457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.652 [2024-12-06 00:00:17.302464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:22:44.652 [2024-12-06 00:00:17.302470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:44.652 [2024-12-06 00:00:17.302476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.652 [2024-12-06 00:00:17.302489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:44.652 [2024-12-06 00:00:17.302495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:44.653 [2024-12-06 00:00:17.302502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:44.653 [2024-12-06 00:00:17.302515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:44.653 [2024-12-06 00:00:17.302535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:44.653 [2024-12-06 00:00:17.302556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:44.653 [2024-12-06 00:00:17.302575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:44.653 [2024-12-06 00:00:17.302594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:44.653 [2024-12-06 00:00:17.302614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.653 [2024-12-06 00:00:17.302626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:44.653 [2024-12-06 00:00:17.302633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:44.653 [2024-12-06 00:00:17.302639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.653 [2024-12-06 00:00:17.302645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:44.653 [2024-12-06 00:00:17.302652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:44.653 [2024-12-06 00:00:17.302658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:44.653 [2024-12-06 00:00:17.302671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:44.653 [2024-12-06 00:00:17.302677] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302683] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:44.653 [2024-12-06 00:00:17.302690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:44.653 [2024-12-06 00:00:17.302697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.653 [2024-12-06 00:00:17.302712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:44.653 [2024-12-06 00:00:17.302718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:44.653 [2024-12-06 00:00:17.302724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:44.653 [2024-12-06 00:00:17.302731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:44.653 [2024-12-06 00:00:17.302737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:44.653 [2024-12-06 00:00:17.302743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:44.653 [2024-12-06 00:00:17.302751] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:44.653 [2024-12-06 00:00:17.302760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:44.653 [2024-12-06 00:00:17.302779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:44.653 [2024-12-06 00:00:17.302786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:44.653 [2024-12-06 00:00:17.302793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:44.653 [2024-12-06 00:00:17.302800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:44.653 [2024-12-06 00:00:17.302806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:44.653 [2024-12-06 00:00:17.302813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:44.653 [2024-12-06 00:00:17.302820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:44.653 [2024-12-06 00:00:17.302826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:44.653 [2024-12-06 00:00:17.302833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:44.653 [2024-12-06 00:00:17.302869] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:44.653 [2024-12-06 00:00:17.302877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:44.653 [2024-12-06 00:00:17.302892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:44.653 [2024-12-06 00:00:17.302899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:44.653 [2024-12-06 00:00:17.302906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:44.653 [2024-12-06 00:00:17.302913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.653 [2024-12-06 00:00:17.302920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:44.653 [2024-12-06 00:00:17.302927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:22:44.653 [2024-12-06 00:00:17.302934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.653 [2024-12-06 00:00:17.328860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.653 [2024-12-06 00:00:17.328904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.653 [2024-12-06 00:00:17.328916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.870 ms 00:22:44.653 [2024-12-06 00:00:17.328927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.653 [2024-12-06 00:00:17.329041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.653 [2024-12-06 00:00:17.329051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:44.653 [2024-12-06 00:00:17.329060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:44.653 [2024-12-06 00:00:17.329067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.371832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.371878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.915 [2024-12-06 00:00:17.371891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.702 ms 00:22:44.915 [2024-12-06 00:00:17.371899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.371948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.371958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.915 [2024-12-06 00:00:17.371984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.003 ms 00:22:44.915 [2024-12-06 00:00:17.371991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.372387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.372404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.915 [2024-12-06 00:00:17.372413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:22:44.915 [2024-12-06 00:00:17.372420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.372546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.372556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.915 [2024-12-06 00:00:17.372568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:44.915 [2024-12-06 00:00:17.372576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.385480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.385645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.915 [2024-12-06 00:00:17.385661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.887 ms 00:22:44.915 [2024-12-06 00:00:17.385669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.397616] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:44.915 [2024-12-06 00:00:17.397647] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:44.915 [2024-12-06 00:00:17.397659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.397667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:44.915 [2024-12-06 00:00:17.397675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.889 ms 00:22:44.915 [2024-12-06 00:00:17.397683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.421810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.421845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:44.915 [2024-12-06 00:00:17.421857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.091 ms 00:22:44.915 [2024-12-06 00:00:17.421864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.433324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.433355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:44.915 [2024-12-06 00:00:17.433365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.415 ms 00:22:44.915 [2024-12-06 00:00:17.433372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.444582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.444705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:44.915 [2024-12-06 00:00:17.444720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.179 ms 00:22:44.915 [2024-12-06 
00:00:17.444727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.445344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.445365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:44.915 [2024-12-06 00:00:17.445376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:22:44.915 [2024-12-06 00:00:17.445383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.501066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.501124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:44.915 [2024-12-06 00:00:17.501142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.664 ms 00:22:44.915 [2024-12-06 00:00:17.501150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.511876] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:44.915 [2024-12-06 00:00:17.514696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.514728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:44.915 [2024-12-06 00:00:17.514741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.488 ms 00:22:44.915 [2024-12-06 00:00:17.514751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.514855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.514866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:44.915 [2024-12-06 00:00:17.514877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:44.915 [2024-12-06 00:00:17.514885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.514950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.514961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:44.915 [2024-12-06 00:00:17.514981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:44.915 [2024-12-06 00:00:17.514989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.515008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.515016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:44.915 [2024-12-06 00:00:17.515024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:44.915 [2024-12-06 00:00:17.515031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.515064] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:44.915 [2024-12-06 00:00:17.515073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.915 [2024-12-06 00:00:17.515081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:44.915 [2024-12-06 00:00:17.515089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:44.915 [2024-12-06 00:00:17.515096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.915 [2024-12-06 00:00:17.538577] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.916 [2024-12-06 00:00:17.538704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:44.916 [2024-12-06 00:00:17.538796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms 00:22:44.916 [2024-12-06 00:00:17.538818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.916 [2024-12-06 00:00:17.538894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.916 [2024-12-06 00:00:17.539014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:44.916 [2024-12-06 00:00:17.539039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:44.916 [2024-12-06 00:00:17.539058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.916 [2024-12-06 00:00:17.539955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.724 ms, result 0 00:22:46.301  [2024-12-06T00:00:19.953Z] Copying: 27/1024 [MB] (27 MBps) [2024-12-06T00:00:20.888Z] Copying: 44/1024 [MB] (16 MBps) [2024-12-06T00:00:21.821Z] Copying: 65/1024 [MB] (21 MBps) [2024-12-06T00:00:22.757Z] Copying: 85/1024 [MB] (19 MBps) [2024-12-06T00:00:24.140Z] Copying: 100/1024 [MB] (15 MBps) [2024-12-06T00:00:25.083Z] Copying: 114/1024 [MB] (13 MBps) [2024-12-06T00:00:26.024Z] Copying: 132/1024 [MB] (18 MBps) [2024-12-06T00:00:26.962Z] Copying: 152/1024 [MB] (19 MBps) [2024-12-06T00:00:27.897Z] Copying: 172/1024 [MB] (20 MBps) [2024-12-06T00:00:28.837Z] Copying: 187/1024 [MB] (15 MBps) [2024-12-06T00:00:29.769Z] Copying: 204/1024 [MB] (16 MBps) [2024-12-06T00:00:31.140Z] Copying: 220/1024 [MB] (16 MBps) [2024-12-06T00:00:32.074Z] Copying: 245/1024 [MB] (24 MBps) [2024-12-06T00:00:33.088Z] Copying: 260/1024 [MB] (15 MBps) [2024-12-06T00:00:34.022Z] Copying: 274/1024 [MB] (13 MBps) [2024-12-06T00:00:34.960Z] Copying: 289/1024 [MB] (15 MBps) [2024-12-06T00:00:35.902Z] Copying: 305/1024 [MB] (15 MBps) [2024-12-06T00:00:36.841Z] Copying: 315/1024 [MB] (10 MBps) [2024-12-06T00:00:37.781Z] Copying: 325/1024 [MB] (10 MBps) [2024-12-06T00:00:38.725Z] Copying: 335/1024 [MB] (10 MBps) [2024-12-06T00:00:40.112Z] Copying: 353920/1048576 [kB] (10044 kBps) [2024-12-06T00:00:41.049Z] Copying: 356/1024 [MB] (11 MBps) [2024-12-06T00:00:41.983Z] Copying: 367/1024 [MB] (11 MBps) [2024-12-06T00:00:42.920Z] Copying: 379/1024 [MB] (11 MBps) [2024-12-06T00:00:43.861Z] Copying: 391/1024 [MB] (11 MBps) [2024-12-06T00:00:44.799Z] Copying: 402/1024 [MB] (11 MBps) [2024-12-06T00:00:45.735Z] Copying: 416/1024 [MB] (13 MBps) [2024-12-06T00:00:47.110Z] Copying: 428/1024 [MB] (12 MBps) [2024-12-06T00:00:48.044Z] Copying: 441/1024 [MB] (12 MBps) [2024-12-06T00:00:48.978Z] Copying: 453/1024 [MB] (12 MBps) [2024-12-06T00:00:49.913Z] Copying: 465/1024 [MB] (11 MBps) [2024-12-06T00:00:50.851Z] Copying: 477/1024 [MB] (11 MBps) [2024-12-06T00:00:51.790Z] Copying: 487/1024 [MB] (10 MBps) [2024-12-06T00:00:52.728Z] Copying: 498/1024 [MB] (11 MBps) [2024-12-06T00:00:54.102Z] Copying: 510/1024 [MB] (11 MBps) [2024-12-06T00:00:55.039Z] Copying: 522/1024 [MB] (12 MBps) [2024-12-06T00:00:55.972Z] Copying: 534/1024 [MB] (12 MBps) [2024-12-06T00:00:56.915Z] Copying: 546/1024 [MB] (11 MBps) [2024-12-06T00:00:57.853Z] Copying: 557/1024 [MB] (10 MBps) [2024-12-06T00:00:58.786Z] Copying: 567/1024 [MB] (10 MBps) [2024-12-06T00:00:59.720Z] Copying: 581/1024 [MB] (13 MBps) [2024-12-06T00:01:01.095Z] 
Copying: 592/1024 [MB] (11 MBps) [2024-12-06T00:01:02.028Z] Copying: 603/1024 [MB] (11 MBps) [2024-12-06T00:01:02.995Z] Copying: 616/1024 [MB] (12 MBps) [2024-12-06T00:01:03.931Z] Copying: 629/1024 [MB] (12 MBps) [2024-12-06T00:01:04.868Z] Copying: 641/1024 [MB] (12 MBps) [2024-12-06T00:01:05.803Z] Copying: 653/1024 [MB] (11 MBps) [2024-12-06T00:01:06.748Z] Copying: 664/1024 [MB] (11 MBps) [2024-12-06T00:01:08.133Z] Copying: 675/1024 [MB] (10 MBps) [2024-12-06T00:01:09.068Z] Copying: 686/1024 [MB] (10 MBps) [2024-12-06T00:01:10.005Z] Copying: 698/1024 [MB] (12 MBps) [2024-12-06T00:01:10.941Z] Copying: 710/1024 [MB] (12 MBps) [2024-12-06T00:01:11.904Z] Copying: 722/1024 [MB] (11 MBps) [2024-12-06T00:01:12.838Z] Copying: 735/1024 [MB] (12 MBps) [2024-12-06T00:01:13.771Z] Copying: 747/1024 [MB] (12 MBps) [2024-12-06T00:01:15.148Z] Copying: 758/1024 [MB] (11 MBps) [2024-12-06T00:01:16.079Z] Copying: 769/1024 [MB] (11 MBps) [2024-12-06T00:01:17.010Z] Copying: 784/1024 [MB] (14 MBps) [2024-12-06T00:01:17.949Z] Copying: 809/1024 [MB] (25 MBps) [2024-12-06T00:01:18.894Z] Copying: 824/1024 [MB] (14 MBps) [2024-12-06T00:01:19.837Z] Copying: 838/1024 [MB] (14 MBps) [2024-12-06T00:01:20.803Z] Copying: 851/1024 [MB] (12 MBps) [2024-12-06T00:01:21.756Z] Copying: 869/1024 [MB] (17 MBps) [2024-12-06T00:01:23.143Z] Copying: 884/1024 [MB] (15 MBps) [2024-12-06T00:01:24.084Z] Copying: 902/1024 [MB] (17 MBps) [2024-12-06T00:01:25.024Z] Copying: 916/1024 [MB] (14 MBps) [2024-12-06T00:01:25.962Z] Copying: 935/1024 [MB] (19 MBps) [2024-12-06T00:01:26.902Z] Copying: 954/1024 [MB] (19 MBps) [2024-12-06T00:01:27.845Z] Copying: 969/1024 [MB] (14 MBps) [2024-12-06T00:01:28.791Z] Copying: 979/1024 [MB] (10 MBps) [2024-12-06T00:01:29.736Z] Copying: 993/1024 [MB] (13 MBps) [2024-12-06T00:01:31.124Z] Copying: 1007/1024 [MB] (14 MBps) [2024-12-06T00:01:31.124Z] Copying: 1019/1024 [MB] (11 MBps) [2024-12-06T00:01:31.124Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-06 00:01:30.935081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.935130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:58.415 [2024-12-06 00:01:30.935144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:58.415 [2024-12-06 00:01:30.935152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.935171] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:58.415 [2024-12-06 00:01:30.937817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.937852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:58.415 [2024-12-06 00:01:30.937862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms 00:23:58.415 [2024-12-06 00:01:30.937870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.938269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.938281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:58.415 [2024-12-06 00:01:30.938289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:23:58.415 [2024-12-06 00:01:30.938296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.941723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:58.415 [2024-12-06 00:01:30.941742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:58.415 [2024-12-06 00:01:30.941752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.414 ms 00:23:58.415 [2024-12-06 00:01:30.941763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.948207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.948233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:58.415 [2024-12-06 00:01:30.948244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.430 ms 00:23:58.415 [2024-12-06 00:01:30.948252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.973928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.973961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:58.415 [2024-12-06 00:01:30.973993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.631 ms 00:23:58.415 [2024-12-06 00:01:30.974001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.991447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.991600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:58.415 [2024-12-06 00:01:30.991617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.413 ms 00:23:58.415 [2024-12-06 00:01:30.991626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:30.991805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:30.991817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:58.415 [2024-12-06 00:01:30.991826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:58.415 [2024-12-06 00:01:30.991833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:31.017510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:31.017543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:58.415 [2024-12-06 00:01:31.017553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:23:58.415 [2024-12-06 00:01:31.017560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:31.042330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:31.042361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:58.415 [2024-12-06 00:01:31.042371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.737 ms 00:23:58.415 [2024-12-06 00:01:31.042378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:31.065467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:31.065593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:58.415 [2024-12-06 00:01:31.065608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.057 ms 00:23:58.415 [2024-12-06 00:01:31.065615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:31.087979] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.415 [2024-12-06 00:01:31.088092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:58.415 [2024-12-06 00:01:31.088106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.316 ms 00:23:58.415 [2024-12-06 00:01:31.088113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.415 [2024-12-06 00:01:31.088139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:58.415 [2024-12-06 00:01:31.088157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:58.415 [2024-12-06 00:01:31.088249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 
00:01:31.088329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:23:58.416 [2024-12-06 00:01:31.088512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:58.416 [2024-12-06 00:01:31.088916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:58.417 [2024-12-06 00:01:31.088924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d1b3a65-8ffa-42fb-a989-b29d7582f516 00:23:58.417 [2024-12-06 00:01:31.088932] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:58.417 [2024-12-06 00:01:31.088938] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:58.417 [2024-12-06 00:01:31.088945] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:58.417 [2024-12-06 00:01:31.088952] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:58.417 [2024-12-06 00:01:31.088985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:58.417 [2024-12-06 00:01:31.088994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:58.417 [2024-12-06 00:01:31.089001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:58.417 [2024-12-06 00:01:31.089007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:58.417 [2024-12-06 00:01:31.089013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:58.417 [2024-12-06 00:01:31.089019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.417 [2024-12-06 00:01:31.089027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:58.417 [2024-12-06 00:01:31.089036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:23:58.417 [2024-12-06 00:01:31.089044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.417 [2024-12-06 00:01:31.101448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.417 [2024-12-06 00:01:31.101476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:58.417 [2024-12-06 00:01:31.101486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.388 ms 00:23:58.417 [2024-12-06 00:01:31.101493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.417 [2024-12-06 00:01:31.101824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.417 [2024-12-06 00:01:31.101833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:58.417 [2024-12-06 00:01:31.101844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:23:58.417 [2024-12-06 00:01:31.101851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.133958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.134000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.678 [2024-12-06 00:01:31.134010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.134017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.134065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.134074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.678 [2024-12-06 00:01:31.134085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.134092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.134154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.134164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.678 [2024-12-06 00:01:31.134172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.134179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.134193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.134200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.678 [2024-12-06 00:01:31.134207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.134217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.210396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.210553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.678 [2024-12-06 00:01:31.210568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.210576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.295136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.295197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.678 [2024-12-06 00:01:31.295220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.295233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.295308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.295322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.678 [2024-12-06 00:01:31.295334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.295346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.295416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.295430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.678 [2024-12-06 00:01:31.295442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.295453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.295579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.295593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:23:58.678 [2024-12-06 00:01:31.295606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.678 [2024-12-06 00:01:31.295617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.678 [2024-12-06 00:01:31.295660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.678 [2024-12-06 00:01:31.295672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:58.679 [2024-12-06 00:01:31.295684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.679 [2024-12-06 00:01:31.295696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.679 [2024-12-06 00:01:31.295745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.679 [2024-12-06 00:01:31.295758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.679 [2024-12-06 00:01:31.295770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.679 [2024-12-06 00:01:31.295781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.679 [2024-12-06 00:01:31.295832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.679 [2024-12-06 00:01:31.295846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.679 [2024-12-06 00:01:31.295859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.679 [2024-12-06 00:01:31.295871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.679 [2024-12-06 00:01:31.296050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.895 ms, result 0 00:23:59.624 00:23:59.624 00:23:59.624 00:01:31 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:01.541 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:01.541 00:01:34 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:01.541 [2024-12-06 00:01:34.144541] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:24:01.541 [2024-12-06 00:01:34.144729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79409 ] 00:24:01.803 [2024-12-06 00:01:34.294493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.803 [2024-12-06 00:01:34.370899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.066 [2024-12-06 00:01:34.580314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.066 [2024-12-06 00:01:34.580496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.066 [2024-12-06 00:01:34.731219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.731347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:02.066 [2024-12-06 00:01:34.731399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.066 [2024-12-06 00:01:34.731418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.731470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.731492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.066 [2024-12-06 00:01:34.731507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:02.066 [2024-12-06 00:01:34.731521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.731546] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:02.066 [2024-12-06 00:01:34.732103] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:02.066 [2024-12-06 00:01:34.732197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.732271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.066 [2024-12-06 00:01:34.732289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:24:02.066 [2024-12-06 00:01:34.732303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.733237] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:02.066 [2024-12-06 00:01:34.742791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.742900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:02.066 [2024-12-06 00:01:34.742941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.556 ms 00:24:02.066 [2024-12-06 00:01:34.742958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.743022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.743061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:02.066 [2024-12-06 00:01:34.743076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:02.066 [2024-12-06 00:01:34.743111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.747437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:02.066 [2024-12-06 00:01:34.747526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.066 [2024-12-06 00:01:34.747565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:24:02.066 [2024-12-06 00:01:34.747587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.747649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.747667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.066 [2024-12-06 00:01:34.747682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:02.066 [2024-12-06 00:01:34.747696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.747736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.747789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:02.066 [2024-12-06 00:01:34.747807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.066 [2024-12-06 00:01:34.747821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.747850] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:02.066 [2024-12-06 00:01:34.750606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.750688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.066 [2024-12-06 00:01:34.750732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:24:02.066 [2024-12-06 00:01:34.750749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.750786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.750914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:02.066 [2024-12-06 00:01:34.750932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:02.066 [2024-12-06 00:01:34.750946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.750980] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:02.066 [2024-12-06 00:01:34.751029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:02.066 [2024-12-06 00:01:34.751075] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:02.066 [2024-12-06 00:01:34.751104] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:02.066 [2024-12-06 00:01:34.751226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:02.066 [2024-12-06 00:01:34.751252] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:02.066 [2024-12-06 00:01:34.751316] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:02.066 [2024-12-06 00:01:34.751361] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:02.066 [2024-12-06 00:01:34.751384] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:02.066 [2024-12-06 00:01:34.751407] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:02.066 [2024-12-06 00:01:34.751421] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:02.066 [2024-12-06 00:01:34.751438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:02.066 [2024-12-06 00:01:34.751509] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:02.066 [2024-12-06 00:01:34.751525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.751540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:02.066 [2024-12-06 00:01:34.751555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:24:02.066 [2024-12-06 00:01:34.751570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.751647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.066 [2024-12-06 00:01:34.751718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:02.066 [2024-12-06 00:01:34.751736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:02.066 [2024-12-06 00:01:34.751751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.066 [2024-12-06 00:01:34.751852] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:02.066 [2024-12-06 00:01:34.751873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:02.066 [2024-12-06 00:01:34.751888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.066 [2024-12-06 00:01:34.751929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.066 [2024-12-06 00:01:34.751946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:02.066 [2024-12-06 00:01:34.751960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:02.066 [2024-12-06 00:01:34.751991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:02.066 [2024-12-06 00:01:34.752008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:02.066 [2024-12-06 00:01:34.752049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:02.066 [2024-12-06 00:01:34.752065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.067 [2024-12-06 00:01:34.752082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:02.067 [2024-12-06 00:01:34.752096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:02.067 [2024-12-06 00:01:34.752113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.067 [2024-12-06 00:01:34.752152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:02.067 [2024-12-06 00:01:34.752169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:02.067 [2024-12-06 00:01:34.752194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:02.067 [2024-12-06 00:01:34.752256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:02.067 [2024-12-06 00:01:34.752275] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:02.067 [2024-12-06 00:01:34.752341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.067 [2024-12-06 00:01:34.752368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:02.067 [2024-12-06 00:01:34.752382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.067 [2024-12-06 00:01:34.752439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:02.067 [2024-12-06 00:01:34.752456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.067 [2024-12-06 00:01:34.752484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:02.067 [2024-12-06 00:01:34.752497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.067 [2024-12-06 00:01:34.752545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:02.067 [2024-12-06 00:01:34.752593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.067 [2024-12-06 00:01:34.752657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:02.067 [2024-12-06 00:01:34.752671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:02.067 [2024-12-06 00:01:34.752685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.067 [2024-12-06 00:01:34.752698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:02.067 [2024-12-06 00:01:34.752712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:02.067 [2024-12-06 00:01:34.752749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:02.067 [2024-12-06 00:01:34.752779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:02.067 [2024-12-06 00:01:34.752792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:02.067 [2024-12-06 00:01:34.752820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:02.067 [2024-12-06 00:01:34.752834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.067 [2024-12-06 00:01:34.752864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.067 [2024-12-06 00:01:34.752909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:02.067 [2024-12-06 00:01:34.752925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:02.067 [2024-12-06 00:01:34.752973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:02.067 
[2024-12-06 00:01:34.752991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:02.067 [2024-12-06 00:01:34.753005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:02.067 [2024-12-06 00:01:34.753018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:02.067 [2024-12-06 00:01:34.753034] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:02.067 [2024-12-06 00:01:34.753080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:02.067 [2024-12-06 00:01:34.753128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:02.067 [2024-12-06 00:01:34.753150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:02.067 [2024-12-06 00:01:34.753191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:02.067 [2024-12-06 00:01:34.753213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:02.067 [2024-12-06 00:01:34.753235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:02.067 [2024-12-06 00:01:34.753256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:02.067 [2024-12-06 00:01:34.753298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:02.067 [2024-12-06 00:01:34.753320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:02.067 [2024-12-06 00:01:34.753365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:02.067 [2024-12-06 00:01:34.753493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:02.067 [2024-12-06 00:01:34.753537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:02.067 [2024-12-06 00:01:34.753582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:02.067 [2024-12-06 00:01:34.753603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:02.067 [2024-12-06 00:01:34.753660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:02.067 [2024-12-06 00:01:34.753699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.067 [2024-12-06 00:01:34.753715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:02.067 [2024-12-06 00:01:34.753730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.900 ms 00:24:02.067 [2024-12-06 00:01:34.753744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.774462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.774555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.329 [2024-12-06 00:01:34.774566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.663 ms 00:24:02.329 [2024-12-06 00:01:34.774576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.774639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.774646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:02.329 [2024-12-06 00:01:34.774652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:02.329 [2024-12-06 00:01:34.774658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.825669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.825700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.329 [2024-12-06 00:01:34.825709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.972 ms 00:24:02.329 [2024-12-06 00:01:34.825716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.825749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.825757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.329 [2024-12-06 00:01:34.825766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:02.329 [2024-12-06 00:01:34.825772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.826111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.826160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.329 [2024-12-06 00:01:34.826169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:24:02.329 [2024-12-06 00:01:34.826176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.826275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.826285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.329 [2024-12-06 00:01:34.826295] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:02.329 [2024-12-06 00:01:34.826301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.836759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.836786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.329 [2024-12-06 00:01:34.836794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.441 ms 00:24:02.329 [2024-12-06 00:01:34.836800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.846465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:02.329 [2024-12-06 00:01:34.846502] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:02.329 [2024-12-06 00:01:34.846513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.846519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:02.329 [2024-12-06 00:01:34.846526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.637 ms 00:24:02.329 [2024-12-06 00:01:34.846532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.865127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.865153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:02.329 [2024-12-06 00:01:34.865163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.565 ms 00:24:02.329 [2024-12-06 00:01:34.865170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.874104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.874130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:02.329 [2024-12-06 00:01:34.874137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.896 ms 00:24:02.329 [2024-12-06 00:01:34.874143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.329 [2024-12-06 00:01:34.882743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.329 [2024-12-06 00:01:34.882767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:02.329 [2024-12-06 00:01:34.882774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.574 ms 00:24:02.329 [2024-12-06 00:01:34.882780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.883244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.883261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:02.330 [2024-12-06 00:01:34.883270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:24:02.330 [2024-12-06 00:01:34.883275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.937285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.937322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:02.330 [2024-12-06 00:01:34.937335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
53.996 ms 00:24:02.330 [2024-12-06 00:01:34.937342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.945190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.330 [2024-12-06 00:01:34.947222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.947246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.330 [2024-12-06 00:01:34.947254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.846 ms 00:24:02.330 [2024-12-06 00:01:34.947261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.947332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.947340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:02.330 [2024-12-06 00:01:34.947349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:02.330 [2024-12-06 00:01:34.947355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.947397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.947404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:02.330 [2024-12-06 00:01:34.947411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:02.330 [2024-12-06 00:01:34.947417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.947440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.947447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:02.330 [2024-12-06 00:01:34.947453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.330 [2024-12-06 00:01:34.947459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.947484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:02.330 [2024-12-06 00:01:34.947491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.947497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:02.330 [2024-12-06 00:01:34.947503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:02.330 [2024-12-06 00:01:34.947508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.965206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.965233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:02.330 [2024-12-06 00:01:34.965245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.684 ms 00:24:02.330 [2024-12-06 00:01:34.965251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 [2024-12-06 00:01:34.965301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.330 [2024-12-06 00:01:34.965308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:02.330 [2024-12-06 00:01:34.965315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:02.330 [2024-12-06 00:01:34.965320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.330 
[2024-12-06 00:01:34.966056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.497 ms, result 0 00:24:03.705  [2024-12-06T00:01:36.985Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-06T00:01:38.369Z] Copying: 41/1024 [MB] (20 MBps) [2024-12-06T00:01:39.309Z] Copying: 60/1024 [MB] (18 MBps) [2024-12-06T00:01:40.248Z] Copying: 77/1024 [MB] (17 MBps) [2024-12-06T00:01:41.190Z] Copying: 94/1024 [MB] (16 MBps) [2024-12-06T00:01:42.133Z] Copying: 119/1024 [MB] (25 MBps) [2024-12-06T00:01:43.076Z] Copying: 137/1024 [MB] (17 MBps) [2024-12-06T00:01:44.015Z] Copying: 153/1024 [MB] (16 MBps) [2024-12-06T00:01:45.403Z] Copying: 171/1024 [MB] (17 MBps) [2024-12-06T00:01:46.345Z] Copying: 182/1024 [MB] (11 MBps) [2024-12-06T00:01:47.290Z] Copying: 194/1024 [MB] (11 MBps) [2024-12-06T00:01:48.235Z] Copying: 205/1024 [MB] (11 MBps) [2024-12-06T00:01:49.181Z] Copying: 219/1024 [MB] (14 MBps) [2024-12-06T00:01:50.125Z] Copying: 237/1024 [MB] (17 MBps) [2024-12-06T00:01:51.068Z] Copying: 251/1024 [MB] (14 MBps) [2024-12-06T00:01:52.013Z] Copying: 261/1024 [MB] (10 MBps) [2024-12-06T00:01:53.400Z] Copying: 277/1024 [MB] (15 MBps) [2024-12-06T00:01:54.341Z] Copying: 293/1024 [MB] (16 MBps) [2024-12-06T00:01:55.270Z] Copying: 311/1024 [MB] (17 MBps) [2024-12-06T00:01:56.200Z] Copying: 344/1024 [MB] (33 MBps) [2024-12-06T00:01:57.138Z] Copying: 390/1024 [MB] (45 MBps) [2024-12-06T00:01:58.081Z] Copying: 412/1024 [MB] (22 MBps) [2024-12-06T00:01:59.023Z] Copying: 428/1024 [MB] (15 MBps) [2024-12-06T00:02:00.409Z] Copying: 443/1024 [MB] (15 MBps) [2024-12-06T00:02:01.347Z] Copying: 461/1024 [MB] (17 MBps) [2024-12-06T00:02:02.282Z] Copying: 481/1024 [MB] (20 MBps) [2024-12-06T00:02:03.219Z] Copying: 503/1024 [MB] (21 MBps) [2024-12-06T00:02:04.154Z] Copying: 525/1024 [MB] (21 MBps) [2024-12-06T00:02:05.164Z] Copying: 538/1024 [MB] (13 MBps) [2024-12-06T00:02:06.097Z] Copying: 559/1024 [MB] (20 MBps) [2024-12-06T00:02:07.034Z] Copying: 572/1024 [MB] (12 MBps) [2024-12-06T00:02:08.419Z] Copying: 589/1024 [MB] (17 MBps) [2024-12-06T00:02:08.987Z] Copying: 607/1024 [MB] (17 MBps) [2024-12-06T00:02:10.368Z] Copying: 628/1024 [MB] (21 MBps) [2024-12-06T00:02:11.310Z] Copying: 648/1024 [MB] (19 MBps) [2024-12-06T00:02:12.248Z] Copying: 658/1024 [MB] (10 MBps) [2024-12-06T00:02:13.188Z] Copying: 669/1024 [MB] (10 MBps) [2024-12-06T00:02:14.128Z] Copying: 679/1024 [MB] (10 MBps) [2024-12-06T00:02:15.070Z] Copying: 691/1024 [MB] (12 MBps) [2024-12-06T00:02:16.011Z] Copying: 701/1024 [MB] (10 MBps) [2024-12-06T00:02:17.394Z] Copying: 712/1024 [MB] (10 MBps) [2024-12-06T00:02:18.337Z] Copying: 722/1024 [MB] (10 MBps) [2024-12-06T00:02:19.279Z] Copying: 735/1024 [MB] (12 MBps) [2024-12-06T00:02:20.263Z] Copying: 746/1024 [MB] (10 MBps) [2024-12-06T00:02:21.205Z] Copying: 756/1024 [MB] (10 MBps) [2024-12-06T00:02:22.140Z] Copying: 784760/1048576 [kB] (10232 kBps) [2024-12-06T00:02:23.073Z] Copying: 778/1024 [MB] (11 MBps) [2024-12-06T00:02:24.007Z] Copying: 790/1024 [MB] (12 MBps) [2024-12-06T00:02:25.382Z] Copying: 801/1024 [MB] (11 MBps) [2024-12-06T00:02:26.330Z] Copying: 812/1024 [MB] (11 MBps) [2024-12-06T00:02:27.271Z] Copying: 823/1024 [MB] (10 MBps) [2024-12-06T00:02:28.211Z] Copying: 834/1024 [MB] (10 MBps) [2024-12-06T00:02:29.153Z] Copying: 848/1024 [MB] (14 MBps) [2024-12-06T00:02:30.097Z] Copying: 872/1024 [MB] (24 MBps) [2024-12-06T00:02:31.041Z] Copying: 886/1024 [MB] (13 MBps) [2024-12-06T00:02:31.983Z] Copying: 898/1024 [MB] (12 MBps) 
[2024-12-06T00:02:33.369Z] Copying: 911/1024 [MB] (13 MBps) [2024-12-06T00:02:34.315Z] Copying: 930/1024 [MB] (18 MBps) [2024-12-06T00:02:35.258Z] Copying: 944/1024 [MB] (14 MBps) [2024-12-06T00:02:36.198Z] Copying: 967/1024 [MB] (22 MBps) [2024-12-06T00:02:37.141Z] Copying: 982/1024 [MB] (14 MBps) [2024-12-06T00:02:38.085Z] Copying: 997/1024 [MB] (15 MBps) [2024-12-06T00:02:39.028Z] Copying: 1016/1024 [MB] (18 MBps) [2024-12-06T00:02:39.028Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 00:02:38.974107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.319 [2024-12-06 00:02:38.974324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:06.319 [2024-12-06 00:02:38.974361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:06.319 [2024-12-06 00:02:38.974371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.320 [2024-12-06 00:02:38.975343] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:06.320 [2024-12-06 00:02:38.978918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.320 [2024-12-06 00:02:38.978984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:06.320 [2024-12-06 00:02:38.978998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.538 ms 00:25:06.320 [2024-12-06 00:02:38.979006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.320 [2024-12-06 00:02:38.993723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.320 [2024-12-06 00:02:38.993898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:06.320 [2024-12-06 00:02:38.993919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.622 ms 00:25:06.320 [2024-12-06 00:02:38.993938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.320 [2024-12-06 00:02:39.016643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.320 [2024-12-06 00:02:39.016862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:06.320 [2024-12-06 00:02:39.016986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.680 ms 00:25:06.320 [2024-12-06 00:02:39.017018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.320 [2024-12-06 00:02:39.023265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.320 [2024-12-06 00:02:39.023424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:06.320 [2024-12-06 00:02:39.023599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:25:06.320 [2024-12-06 00:02:39.023651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.581 [2024-12-06 00:02:39.050180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.581 [2024-12-06 00:02:39.050356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:06.581 [2024-12-06 00:02:39.050487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.449 ms 00:25:06.581 [2024-12-06 00:02:39.050513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.581 [2024-12-06 00:02:39.066804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.581 [2024-12-06 00:02:39.066995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:25:06.581 [2024-12-06 00:02:39.067062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.209 ms 00:25:06.581 [2024-12-06 00:02:39.067086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.581 [2024-12-06 00:02:39.247602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.581 [2024-12-06 00:02:39.247761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:06.581 [2024-12-06 00:02:39.247819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 180.403 ms 00:25:06.581 [2024-12-06 00:02:39.247843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.581 [2024-12-06 00:02:39.274100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.581 [2024-12-06 00:02:39.274266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:06.581 [2024-12-06 00:02:39.274327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.225 ms 00:25:06.581 [2024-12-06 00:02:39.274349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.844 [2024-12-06 00:02:39.299554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.844 [2024-12-06 00:02:39.299717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:06.844 [2024-12-06 00:02:39.299774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.156 ms 00:25:06.844 [2024-12-06 00:02:39.299796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.844 [2024-12-06 00:02:39.325021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.844 [2024-12-06 00:02:39.325187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:06.844 [2024-12-06 00:02:39.325245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.143 ms 00:25:06.844 [2024-12-06 00:02:39.325266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.844 [2024-12-06 00:02:39.350384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.844 [2024-12-06 00:02:39.350542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:06.844 [2024-12-06 00:02:39.350599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.975 ms 00:25:06.844 [2024-12-06 00:02:39.350621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.844 [2024-12-06 00:02:39.350703] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:06.844 [2024-12-06 00:02:39.350735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 65024 / 261120 wr_cnt: 1 state: open 00:25:06.844 [2024-12-06 00:02:39.350771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.350801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.350829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.350906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.350938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.350982] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351358] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 
00:02:39.351567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:25:06.844 [2024-12-06 00:02:39.351766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:06.844 [2024-12-06 00:02:39.351856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:06.845 [2024-12-06 00:02:39.351931] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:06.845 [2024-12-06 00:02:39.351941] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d1b3a65-8ffa-42fb-a989-b29d7582f516 00:25:06.845 [2024-12-06 00:02:39.351950] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 65024 00:25:06.845 [2024-12-06 00:02:39.351958] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 65984 00:25:06.845 [2024-12-06 00:02:39.351980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 65024 00:25:06.845 [2024-12-06 00:02:39.351990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:25:06.845 [2024-12-06 00:02:39.352010] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
limits: 00:25:06.845 [2024-12-06 00:02:39.352019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:06.845 [2024-12-06 00:02:39.352028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:06.845 [2024-12-06 00:02:39.352035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:06.845 [2024-12-06 00:02:39.352042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:06.845 [2024-12-06 00:02:39.352050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.845 [2024-12-06 00:02:39.352059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:06.845 [2024-12-06 00:02:39.352068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:25:06.845 [2024-12-06 00:02:39.352075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.365747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.845 [2024-12-06 00:02:39.365790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:06.845 [2024-12-06 00:02:39.365808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.634 ms 00:25:06.845 [2024-12-06 00:02:39.365816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.366256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.845 [2024-12-06 00:02:39.366269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:06.845 [2024-12-06 00:02:39.366279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:25:06.845 [2024-12-06 00:02:39.366287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.402594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.845 [2024-12-06 00:02:39.402642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.845 [2024-12-06 00:02:39.402655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.845 [2024-12-06 00:02:39.402664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.402730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.845 [2024-12-06 00:02:39.402740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.845 [2024-12-06 00:02:39.402751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.845 [2024-12-06 00:02:39.402760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.402828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.845 [2024-12-06 00:02:39.402844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.845 [2024-12-06 00:02:39.402854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.845 [2024-12-06 00:02:39.402863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.402880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.845 [2024-12-06 00:02:39.402889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.845 [2024-12-06 00:02:39.402899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.845 [2024-12-06 00:02:39.402908] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.845 [2024-12-06 00:02:39.488296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.845 [2024-12-06 00:02:39.488541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.845 [2024-12-06 00:02:39.488563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.845 [2024-12-06 00:02:39.488572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:07.106 [2024-12-06 00:02:39.558404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:07.106 [2024-12-06 00:02:39.558515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:07.106 [2024-12-06 00:02:39.558584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:07.106 [2024-12-06 00:02:39.558711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:07.106 [2024-12-06 00:02:39.558773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:07.106 [2024-12-06 00:02:39.558842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.558902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.106 [2024-12-06 00:02:39.558914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:07.106 [2024-12-06 00:02:39.558923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:07.106 [2024-12-06 00:02:39.558932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.106 [2024-12-06 00:02:39.559102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 586.511 ms, result 0 00:25:08.051 00:25:08.051 00:25:08.051 00:02:40 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:08.051 [2024-12-06 00:02:40.739505] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:25:08.051 [2024-12-06 00:02:40.739653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80125 ] 00:25:08.313 [2024-12-06 00:02:40.903239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.575 [2024-12-06 00:02:41.021071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.837 [2024-12-06 00:02:41.317617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.837 [2024-12-06 00:02:41.317705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.837 [2024-12-06 00:02:41.479851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.479916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.837 [2024-12-06 00:02:41.479931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:08.837 [2024-12-06 00:02:41.479940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.480030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.480045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.837 [2024-12-06 00:02:41.480054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:08.837 [2024-12-06 00:02:41.480063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.480085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.837 [2024-12-06 00:02:41.480862] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.837 [2024-12-06 00:02:41.480888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.480898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.837 [2024-12-06 00:02:41.480907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:25:08.837 [2024-12-06 00:02:41.480915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.482618] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:08.837 [2024-12-06 00:02:41.496606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.496806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:08.837 [2024-12-06 00:02:41.496829] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.991 ms 00:25:08.837 [2024-12-06 00:02:41.496837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.497051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.497079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:08.837 [2024-12-06 00:02:41.497091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:08.837 [2024-12-06 00:02:41.497099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.505155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.505199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.837 [2024-12-06 00:02:41.505210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.976 ms 00:25:08.837 [2024-12-06 00:02:41.505225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.505307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.505316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.837 [2024-12-06 00:02:41.505325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:08.837 [2024-12-06 00:02:41.505333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.505377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.505392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.837 [2024-12-06 00:02:41.505401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:08.837 [2024-12-06 00:02:41.505408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.505433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.837 [2024-12-06 00:02:41.509597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.509634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.837 [2024-12-06 00:02:41.509648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:25:08.837 [2024-12-06 00:02:41.509656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.509692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.837 [2024-12-06 00:02:41.509701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.837 [2024-12-06 00:02:41.509710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:08.837 [2024-12-06 00:02:41.509718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.837 [2024-12-06 00:02:41.509770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:08.837 [2024-12-06 00:02:41.509795] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:08.837 [2024-12-06 00:02:41.509832] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:08.837 [2024-12-06 00:02:41.509851] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:08.837 [2024-12-06 00:02:41.509959] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:08.837 [2024-12-06 00:02:41.509990] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.837 [2024-12-06 00:02:41.510002] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:08.837 [2024-12-06 00:02:41.510013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.837 [2024-12-06 00:02:41.510023] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.837 [2024-12-06 00:02:41.510032] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:08.837 [2024-12-06 00:02:41.510040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.837 [2024-12-06 00:02:41.510051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:08.837 [2024-12-06 00:02:41.510060] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:08.837 [2024-12-06 00:02:41.510069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-12-06 00:02:41.510077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.838 [2024-12-06 00:02:41.510085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:25:08.838 [2024-12-06 00:02:41.510093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-12-06 00:02:41.510177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-12-06 00:02:41.510187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.838 [2024-12-06 00:02:41.510194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:08.838 [2024-12-06 00:02:41.510202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-12-06 00:02:41.510308] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.838 [2024-12-06 00:02:41.510319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.838 [2024-12-06 00:02:41.510327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.838 [2024-12-06 00:02:41.510351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.838 [2024-12-06 00:02:41.510373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.838 [2024-12-06 00:02:41.510387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.838 [2024-12-06 00:02:41.510394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:08.838 [2024-12-06 
00:02:41.510400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.838 [2024-12-06 00:02:41.510415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.838 [2024-12-06 00:02:41.510422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:08.838 [2024-12-06 00:02:41.510428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.838 [2024-12-06 00:02:41.510444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.838 [2024-12-06 00:02:41.510469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.838 [2024-12-06 00:02:41.510489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.838 [2024-12-06 00:02:41.510509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.838 [2024-12-06 00:02:41.510532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.838 [2024-12-06 00:02:41.510553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.838 [2024-12-06 00:02:41.510567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.838 [2024-12-06 00:02:41.510574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:08.838 [2024-12-06 00:02:41.510581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.838 [2024-12-06 00:02:41.510588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:08.838 [2024-12-06 00:02:41.510594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:08.838 [2024-12-06 00:02:41.510601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:08.838 [2024-12-06 00:02:41.510615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:08.838 [2024-12-06 00:02:41.510622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510629] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] 
Base device layout: 00:25:08.838 [2024-12-06 00:02:41.510637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.838 [2024-12-06 00:02:41.510645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.838 [2024-12-06 00:02:41.510666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:08.838 [2024-12-06 00:02:41.510674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.838 [2024-12-06 00:02:41.510681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:08.838 [2024-12-06 00:02:41.510689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.838 [2024-12-06 00:02:41.510695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.838 [2024-12-06 00:02:41.510702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.838 [2024-12-06 00:02:41.510711] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.838 [2024-12-06 00:02:41.510721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:08.838 [2024-12-06 00:02:41.510740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:08.838 [2024-12-06 00:02:41.510748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:08.838 [2024-12-06 00:02:41.510756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:08.838 [2024-12-06 00:02:41.510764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:08.838 [2024-12-06 00:02:41.510772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:08.838 [2024-12-06 00:02:41.510779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:08.838 [2024-12-06 00:02:41.510787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:08.838 [2024-12-06 00:02:41.510794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:08.838 [2024-12-06 00:02:41.510802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:08.838 [2024-12-06 00:02:41.510838] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.838 [2024-12-06 00:02:41.510847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.838 [2024-12-06 00:02:41.510862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.838 [2024-12-06 00:02:41.510871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.838 [2024-12-06 00:02:41.510878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:08.838 [2024-12-06 00:02:41.510886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-12-06 00:02:41.510893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.838 [2024-12-06 00:02:41.510900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:25:08.838 [2024-12-06 00:02:41.510908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-12-06 00:02:41.542477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-12-06 00:02:41.542530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.838 [2024-12-06 00:02:41.542542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.518 ms 00:25:08.838 [2024-12-06 00:02:41.542554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-12-06 00:02:41.542645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-12-06 00:02:41.542654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.838 [2024-12-06 00:02:41.542662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:08.838 [2024-12-06 00:02:41.542670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.102 [2024-12-06 00:02:41.590189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.102 [2024-12-06 00:02:41.590242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.102 [2024-12-06 00:02:41.590255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.458 ms 00:25:09.102 [2024-12-06 00:02:41.590264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.102 [2024-12-06 00:02:41.590312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.102 [2024-12-06 00:02:41.590323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.102 [2024-12-06 00:02:41.590337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.102 [2024-12-06 00:02:41.590345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.102 
[2024-12-06 00:02:41.590902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.102 [2024-12-06 00:02:41.590936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.102 [2024-12-06 00:02:41.590948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:25:09.102 [2024-12-06 00:02:41.590957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.102 [2024-12-06 00:02:41.591135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.102 [2024-12-06 00:02:41.591201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.102 [2024-12-06 00:02:41.591220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:09.102 [2024-12-06 00:02:41.591228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.102 [2024-12-06 00:02:41.606855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.607081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.103 [2024-12-06 00:02:41.607102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.603 ms 00:25:09.103 [2024-12-06 00:02:41.607110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.621380] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:09.103 [2024-12-06 00:02:41.621426] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.103 [2024-12-06 00:02:41.621440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.621449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.103 [2024-12-06 00:02:41.621459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.218 ms 00:25:09.103 [2024-12-06 00:02:41.621466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.647637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.647686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.103 [2024-12-06 00:02:41.647698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.119 ms 00:25:09.103 [2024-12-06 00:02:41.647706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.660721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.660767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.103 [2024-12-06 00:02:41.660779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.955 ms 00:25:09.103 [2024-12-06 00:02:41.660788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.673618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.673664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.103 [2024-12-06 00:02:41.673676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.783 ms 00:25:09.103 [2024-12-06 00:02:41.673685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.674357] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.674383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.103 [2024-12-06 00:02:41.674396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:25:09.103 [2024-12-06 00:02:41.674405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.740743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.740807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.103 [2024-12-06 00:02:41.740830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.318 ms 00:25:09.103 [2024-12-06 00:02:41.740839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.752512] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.103 [2024-12-06 00:02:41.755666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.755708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.103 [2024-12-06 00:02:41.755720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.771 ms 00:25:09.103 [2024-12-06 00:02:41.755728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.755816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.755827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.103 [2024-12-06 00:02:41.755840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:09.103 [2024-12-06 00:02:41.755848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.757284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.757332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.103 [2024-12-06 00:02:41.757342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:25:09.103 [2024-12-06 00:02:41.757351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.757379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.757389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.103 [2024-12-06 00:02:41.757398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:09.103 [2024-12-06 00:02:41.757406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.757451] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.103 [2024-12-06 00:02:41.757462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.757470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.103 [2024-12-06 00:02:41.757479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:09.103 [2024-12-06 00:02:41.757488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.783422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.783492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL dirty state 00:25:09.103 [2024-12-06 00:02:41.783512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.914 ms 00:25:09.103 [2024-12-06 00:02:41.783521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.783604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.103 [2024-12-06 00:02:41.783615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.103 [2024-12-06 00:02:41.783625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:09.103 [2024-12-06 00:02:41.783633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.103 [2024-12-06 00:02:41.784915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.586 ms, result 0 00:25:10.491  [2024-12-06T00:02:44.145Z] Copying: 5100/1048576 [kB] (5100 kBps) [2024-12-06T00:02:45.112Z] Copying: 15/1024 [MB] (10 MBps) [2024-12-06T00:02:46.054Z] Copying: 31/1024 [MB] (15 MBps) [2024-12-06T00:02:46.996Z] Copying: 43/1024 [MB] (12 MBps) [2024-12-06T00:02:48.381Z] Copying: 58/1024 [MB] (14 MBps) [2024-12-06T00:02:49.326Z] Copying: 72/1024 [MB] (13 MBps) [2024-12-06T00:02:50.269Z] Copying: 88/1024 [MB] (15 MBps) [2024-12-06T00:02:51.212Z] Copying: 99/1024 [MB] (10 MBps) [2024-12-06T00:02:52.153Z] Copying: 119/1024 [MB] (20 MBps) [2024-12-06T00:02:53.128Z] Copying: 134/1024 [MB] (15 MBps) [2024-12-06T00:02:54.123Z] Copying: 145/1024 [MB] (10 MBps) [2024-12-06T00:02:55.065Z] Copying: 155/1024 [MB] (10 MBps) [2024-12-06T00:02:56.010Z] Copying: 177/1024 [MB] (21 MBps) [2024-12-06T00:02:57.396Z] Copying: 189/1024 [MB] (12 MBps) [2024-12-06T00:02:58.340Z] Copying: 209/1024 [MB] (20 MBps) [2024-12-06T00:02:59.284Z] Copying: 228/1024 [MB] (18 MBps) [2024-12-06T00:03:00.229Z] Copying: 243/1024 [MB] (15 MBps) [2024-12-06T00:03:01.173Z] Copying: 260/1024 [MB] (16 MBps) [2024-12-06T00:03:02.117Z] Copying: 284/1024 [MB] (24 MBps) [2024-12-06T00:03:03.060Z] Copying: 312/1024 [MB] (27 MBps) [2024-12-06T00:03:04.002Z] Copying: 341/1024 [MB] (28 MBps) [2024-12-06T00:03:05.395Z] Copying: 364/1024 [MB] (23 MBps) [2024-12-06T00:03:06.336Z] Copying: 386/1024 [MB] (21 MBps) [2024-12-06T00:03:07.280Z] Copying: 396/1024 [MB] (10 MBps) [2024-12-06T00:03:08.225Z] Copying: 407/1024 [MB] (10 MBps) [2024-12-06T00:03:09.171Z] Copying: 421/1024 [MB] (14 MBps) [2024-12-06T00:03:10.115Z] Copying: 432/1024 [MB] (10 MBps) [2024-12-06T00:03:11.059Z] Copying: 443/1024 [MB] (10 MBps) [2024-12-06T00:03:12.001Z] Copying: 457/1024 [MB] (14 MBps) [2024-12-06T00:03:13.384Z] Copying: 475/1024 [MB] (17 MBps) [2024-12-06T00:03:14.325Z] Copying: 485/1024 [MB] (10 MBps) [2024-12-06T00:03:15.267Z] Copying: 495/1024 [MB] (10 MBps) [2024-12-06T00:03:16.208Z] Copying: 517/1024 [MB] (21 MBps) [2024-12-06T00:03:17.150Z] Copying: 532/1024 [MB] (14 MBps) [2024-12-06T00:03:18.093Z] Copying: 545/1024 [MB] (13 MBps) [2024-12-06T00:03:19.036Z] Copying: 561/1024 [MB] (16 MBps) [2024-12-06T00:03:19.977Z] Copying: 581/1024 [MB] (20 MBps) [2024-12-06T00:03:21.365Z] Copying: 602/1024 [MB] (20 MBps) [2024-12-06T00:03:22.386Z] Copying: 619/1024 [MB] (17 MBps) [2024-12-06T00:03:22.993Z] Copying: 641/1024 [MB] (21 MBps) [2024-12-06T00:03:24.382Z] Copying: 658/1024 [MB] (17 MBps) [2024-12-06T00:03:25.325Z] Copying: 675/1024 [MB] (16 MBps) [2024-12-06T00:03:26.266Z] Copying: 695/1024 [MB] (19 MBps) [2024-12-06T00:03:27.212Z] Copying: 709/1024 [MB] (14 MBps) 
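
The copy progress above is the ftl_restore read-back step: spdk_dd reads the written region back out of the ftl0 bdev into a regular file, which is then checked against a previously recorded md5 sum (the md5sum -c line appears further down in this log). A minimal stand-alone sketch of that step, reusing the paths and flags from the spdk_dd invocation above; the variable names are illustrative and the FTL bdev/JSON setup is assumed to already be in place:

#!/usr/bin/env bash
# Sketch of the ftl_restore read-back/verify step (not the test script itself).
# Assumes ftl0 is already configured via the JSON file and testfile.md5 was
# recorded earlier in the test run.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
TESTFILE="$SPDK_DIR/test/ftl/testfile"

# Read 262144 I/O units back from the ftl0 bdev, skipping the first 131072 of
# the input, into a plain file (flags copied from the spdk_dd command in this log).
"$SPDK_DIR/build/bin/spdk_dd" \
    --ib=ftl0 \
    --of="$TESTFILE" \
    --json="$SPDK_DIR/test/ftl/config/ftl.json" \
    --skip=131072 \
    --count=262144

# Verify the restored data against the stored checksum; a mismatch fails the check.
md5sum -c "$TESTFILE.md5"
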
[2024-12-06T00:03:28.157Z] Copying: 720/1024 [MB] (10 MBps) [2024-12-06T00:03:29.104Z] Copying: 730/1024 [MB] (10 MBps) [2024-12-06T00:03:30.049Z] Copying: 740/1024 [MB] (10 MBps) [2024-12-06T00:03:30.995Z] Copying: 751/1024 [MB] (10 MBps) [2024-12-06T00:03:32.379Z] Copying: 761/1024 [MB] (10 MBps) [2024-12-06T00:03:33.323Z] Copying: 773/1024 [MB] (11 MBps) [2024-12-06T00:03:34.269Z] Copying: 784/1024 [MB] (10 MBps) [2024-12-06T00:03:35.210Z] Copying: 797/1024 [MB] (13 MBps) [2024-12-06T00:03:36.150Z] Copying: 808/1024 [MB] (10 MBps) [2024-12-06T00:03:37.094Z] Copying: 825/1024 [MB] (17 MBps) [2024-12-06T00:03:38.038Z] Copying: 836/1024 [MB] (10 MBps) [2024-12-06T00:03:38.983Z] Copying: 847/1024 [MB] (10 MBps) [2024-12-06T00:03:40.372Z] Copying: 857/1024 [MB] (10 MBps) [2024-12-06T00:03:41.317Z] Copying: 868/1024 [MB] (10 MBps) [2024-12-06T00:03:42.261Z] Copying: 882/1024 [MB] (13 MBps) [2024-12-06T00:03:43.205Z] Copying: 893/1024 [MB] (11 MBps) [2024-12-06T00:03:44.149Z] Copying: 904/1024 [MB] (11 MBps) [2024-12-06T00:03:45.092Z] Copying: 915/1024 [MB] (11 MBps) [2024-12-06T00:03:46.031Z] Copying: 936/1024 [MB] (20 MBps) [2024-12-06T00:03:47.417Z] Copying: 958/1024 [MB] (21 MBps) [2024-12-06T00:03:47.989Z] Copying: 978/1024 [MB] (19 MBps) [2024-12-06T00:03:49.372Z] Copying: 997/1024 [MB] (19 MBps) [2024-12-06T00:03:49.372Z] Copying: 1023/1024 [MB] (25 MBps) [2024-12-06T00:03:49.632Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 00:03:49.431122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.431245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.924 [2024-12-06 00:03:49.431284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:16.924 [2024-12-06 00:03:49.431304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.924 [2024-12-06 00:03:49.431351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.924 [2024-12-06 00:03:49.438441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.438661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.924 [2024-12-06 00:03:49.438923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.055 ms 00:26:16.924 [2024-12-06 00:03:49.439019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.924 [2024-12-06 00:03:49.439459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.439516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.924 [2024-12-06 00:03:49.439555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:26:16.924 [2024-12-06 00:03:49.439581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.924 [2024-12-06 00:03:49.449768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.449932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.924 [2024-12-06 00:03:49.449954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.160 ms 00:26:16.924 [2024-12-06 00:03:49.449963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.924 [2024-12-06 00:03:49.456226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.456447] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.924 [2024-12-06 00:03:49.456466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:26:16.924 [2024-12-06 00:03:49.456482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.924 [2024-12-06 00:03:49.483279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.483330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.924 [2024-12-06 00:03:49.483343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.733 ms 00:26:16.924 [2024-12-06 00:03:49.483351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.924 [2024-12-06 00:03:49.499118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.924 [2024-12-06 00:03:49.499300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.924 [2024-12-06 00:03:49.499321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.722 ms 00:26:16.924 [2024-12-06 00:03:49.499330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-12-06 00:03:49.744728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.186 [2024-12-06 00:03:49.744792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:17.186 [2024-12-06 00:03:49.744806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 245.256 ms 00:26:17.186 [2024-12-06 00:03:49.744814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-12-06 00:03:49.770595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.186 [2024-12-06 00:03:49.770778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:17.186 [2024-12-06 00:03:49.770798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:26:17.186 [2024-12-06 00:03:49.770807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-12-06 00:03:49.796583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.186 [2024-12-06 00:03:49.796636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:17.186 [2024-12-06 00:03:49.796649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.668 ms 00:26:17.186 [2024-12-06 00:03:49.796656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-12-06 00:03:49.821746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.186 [2024-12-06 00:03:49.821791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:17.186 [2024-12-06 00:03:49.821803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.047 ms 00:26:17.186 [2024-12-06 00:03:49.821810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-12-06 00:03:49.846474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.186 [2024-12-06 00:03:49.846517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:17.186 [2024-12-06 00:03:49.846530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.594 ms 00:26:17.186 [2024-12-06 00:03:49.846537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.186 [2024-12-06 00:03:49.846579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:26:17.186 [2024-12-06 00:03:49.846595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:17.186 [2024-12-06 00:03:49.846606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:17.186 [2024-12-06 00:03:49.846990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.846999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847024] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 
00:03:49.847222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:26:17.187 [2024-12-06 00:03:49.847431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:17.187 [2024-12-06 00:03:49.847448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:17.187 [2024-12-06 00:03:49.847457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d1b3a65-8ffa-42fb-a989-b29d7582f516 00:26:17.187 [2024-12-06 00:03:49.847466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:17.187 [2024-12-06 00:03:49.847474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 67008 00:26:17.187 [2024-12-06 00:03:49.847481] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 66048 00:26:17.187 [2024-12-06 00:03:49.847490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0145 00:26:17.187 [2024-12-06 00:03:49.847501] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:17.187 [2024-12-06 00:03:49.847516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:17.187 [2024-12-06 00:03:49.847523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:17.187 [2024-12-06 00:03:49.847531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:17.187 [2024-12-06 00:03:49.847537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:17.187 [2024-12-06 00:03:49.847545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.187 [2024-12-06 00:03:49.847553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:17.187 [2024-12-06 00:03:49.847562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:26:17.187 [2024-12-06 00:03:49.847571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-12-06 00:03:49.861040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.187 [2024-12-06 00:03:49.861082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:17.187 [2024-12-06 00:03:49.861101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.451 ms 00:26:17.187 [2024-12-06 00:03:49.861109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.187 [2024-12-06 00:03:49.861511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.187 [2024-12-06 00:03:49.861525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:17.187 [2024-12-06 00:03:49.861534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:26:17.187 [2024-12-06 00:03:49.861542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-12-06 00:03:49.898029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-12-06 00:03:49.898084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.447 [2024-12-06 00:03:49.898097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-12-06 00:03:49.898108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-12-06 00:03:49.898174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.447 [2024-12-06 00:03:49.898184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.447 [2024-12-06 00:03:49.898194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:17.447 [2024-12-06 00:03:49.898204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.447 [2024-12-06 00:03:49.898267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:49.898279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.448 [2024-12-06 00:03:49.898293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:49.898302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:49.898318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:49.898328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.448 [2024-12-06 00:03:49.898337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:49.898347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:49.985415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:49.985481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.448 [2024-12-06 00:03:49.985497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:49.985506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.448 [2024-12-06 00:03:50.055330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.448 [2024-12-06 00:03:50.055452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.448 [2024-12-06 00:03:50.055529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.448 [2024-12-06 00:03:50.055657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:17.448 
[2024-12-06 00:03:50.055720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.448 [2024-12-06 00:03:50.055791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.055852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:17.448 [2024-12-06 00:03:50.055862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.448 [2024-12-06 00:03:50.055871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:17.448 [2024-12-06 00:03:50.055880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.448 [2024-12-06 00:03:50.056057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 624.881 ms, result 0 00:26:18.437 00:26:18.437 00:26:18.437 00:03:50 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:20.361 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:20.361 00:03:52 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:20.361 00:03:52 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:20.361 00:03:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:20.622 00:03:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77385 00:26:20.623 Process with pid 77385 is not found 00:26:20.623 Remove shared memory files 00:26:20.623 00:03:53 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77385 ']' 00:26:20.623 00:03:53 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77385 00:26:20.623 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77385) - No such process 00:26:20.623 00:03:53 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77385 is not found' 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:20.623 00:03:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:20.623 ************************************ 00:26:20.623 END TEST ftl_restore 00:26:20.623 ************************************ 00:26:20.623 00:26:20.623 real 5m33.695s 00:26:20.623 user 5m20.490s 00:26:20.623 sys 0m12.704s 00:26:20.623 00:03:53 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.623 00:03:53 ftl.ftl_restore -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.623 00:03:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:20.623 00:03:53 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:20.623 00:03:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.623 00:03:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:20.623 ************************************ 00:26:20.623 START TEST ftl_dirty_shutdown 00:26:20.623 ************************************ 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:20.623 * Looking for test storage... 00:26:20.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.623 --rc genhtml_branch_coverage=1 00:26:20.623 --rc genhtml_function_coverage=1 00:26:20.623 --rc genhtml_legend=1 00:26:20.623 --rc geninfo_all_blocks=1 00:26:20.623 --rc geninfo_unexecuted_blocks=1 00:26:20.623 00:26:20.623 ' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.623 --rc genhtml_branch_coverage=1 00:26:20.623 --rc genhtml_function_coverage=1 00:26:20.623 --rc genhtml_legend=1 00:26:20.623 --rc geninfo_all_blocks=1 00:26:20.623 --rc geninfo_unexecuted_blocks=1 00:26:20.623 00:26:20.623 ' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.623 --rc genhtml_branch_coverage=1 00:26:20.623 --rc genhtml_function_coverage=1 00:26:20.623 --rc genhtml_legend=1 00:26:20.623 --rc geninfo_all_blocks=1 00:26:20.623 --rc geninfo_unexecuted_blocks=1 00:26:20.623 00:26:20.623 ' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:20.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.623 --rc genhtml_branch_coverage=1 00:26:20.623 --rc genhtml_function_coverage=1 00:26:20.623 --rc genhtml_legend=1 00:26:20.623 --rc geninfo_all_blocks=1 00:26:20.623 --rc geninfo_unexecuted_blocks=1 00:26:20.623 00:26:20.623 ' 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:20.623 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:20.884 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:20.885 00:03:53 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80927 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80927 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80927 ']' 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:20.885 00:03:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:20.885 [2024-12-06 00:03:53.424892] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:26:20.885 [2024-12-06 00:03:53.425057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80927 ] 00:26:20.885 [2024-12-06 00:03:53.589090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.146 [2024-12-06 00:03:53.715634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:21.720 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:22.291 { 00:26:22.291 "name": "nvme0n1", 00:26:22.291 "aliases": [ 00:26:22.291 "63b5657f-ac18-4d50-9b67-e9c58708c46d" 00:26:22.291 ], 00:26:22.291 "product_name": "NVMe disk", 00:26:22.291 "block_size": 4096, 00:26:22.291 "num_blocks": 1310720, 00:26:22.291 "uuid": "63b5657f-ac18-4d50-9b67-e9c58708c46d", 00:26:22.291 "numa_id": -1, 00:26:22.291 "assigned_rate_limits": { 00:26:22.291 "rw_ios_per_sec": 0, 00:26:22.291 "rw_mbytes_per_sec": 0, 00:26:22.291 "r_mbytes_per_sec": 0, 00:26:22.291 "w_mbytes_per_sec": 0 00:26:22.291 }, 00:26:22.291 "claimed": true, 00:26:22.291 "claim_type": "read_many_write_one", 00:26:22.291 "zoned": false, 00:26:22.291 "supported_io_types": { 00:26:22.291 "read": true, 00:26:22.291 "write": true, 00:26:22.291 "unmap": true, 00:26:22.291 "flush": true, 00:26:22.291 "reset": true, 00:26:22.291 "nvme_admin": true, 00:26:22.291 "nvme_io": true, 00:26:22.291 "nvme_io_md": false, 00:26:22.291 "write_zeroes": true, 00:26:22.291 "zcopy": false, 00:26:22.291 "get_zone_info": false, 00:26:22.291 "zone_management": false, 00:26:22.291 "zone_append": false, 00:26:22.291 "compare": true, 00:26:22.291 "compare_and_write": false, 00:26:22.291 "abort": true, 00:26:22.291 "seek_hole": false, 00:26:22.291 "seek_data": false, 00:26:22.291 
"copy": true, 00:26:22.291 "nvme_iov_md": false 00:26:22.291 }, 00:26:22.291 "driver_specific": { 00:26:22.291 "nvme": [ 00:26:22.291 { 00:26:22.291 "pci_address": "0000:00:11.0", 00:26:22.291 "trid": { 00:26:22.291 "trtype": "PCIe", 00:26:22.291 "traddr": "0000:00:11.0" 00:26:22.291 }, 00:26:22.291 "ctrlr_data": { 00:26:22.291 "cntlid": 0, 00:26:22.291 "vendor_id": "0x1b36", 00:26:22.291 "model_number": "QEMU NVMe Ctrl", 00:26:22.291 "serial_number": "12341", 00:26:22.291 "firmware_revision": "8.0.0", 00:26:22.291 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:22.291 "oacs": { 00:26:22.291 "security": 0, 00:26:22.291 "format": 1, 00:26:22.291 "firmware": 0, 00:26:22.291 "ns_manage": 1 00:26:22.291 }, 00:26:22.291 "multi_ctrlr": false, 00:26:22.291 "ana_reporting": false 00:26:22.291 }, 00:26:22.291 "vs": { 00:26:22.291 "nvme_version": "1.4" 00:26:22.291 }, 00:26:22.291 "ns_data": { 00:26:22.291 "id": 1, 00:26:22.291 "can_share": false 00:26:22.291 } 00:26:22.291 } 00:26:22.291 ], 00:26:22.291 "mp_policy": "active_passive" 00:26:22.291 } 00:26:22.291 } 00:26:22.291 ]' 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:22.291 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:22.559 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:22.559 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:22.559 00:03:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=26beb0a2-0f49-45a2-975d-948f66becf95 00:26:22.559 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:22.560 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26beb0a2-0f49-45a2-975d-948f66becf95 00:26:22.823 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:23.083 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e09870e9-03a2-4186-879e-099f7ea4cc86 00:26:23.083 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e09870e9-03a2-4186-879e-099f7ea4cc86 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:23.342 00:03:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:23.601 { 00:26:23.601 "name": "add50ef1-7ce5-44fd-96cf-bb8083ce9542", 00:26:23.601 "aliases": [ 00:26:23.601 "lvs/nvme0n1p0" 00:26:23.601 ], 00:26:23.601 "product_name": "Logical Volume", 00:26:23.601 "block_size": 4096, 00:26:23.601 "num_blocks": 26476544, 00:26:23.601 "uuid": "add50ef1-7ce5-44fd-96cf-bb8083ce9542", 00:26:23.601 "assigned_rate_limits": { 00:26:23.601 "rw_ios_per_sec": 0, 00:26:23.601 "rw_mbytes_per_sec": 0, 00:26:23.601 "r_mbytes_per_sec": 0, 00:26:23.601 "w_mbytes_per_sec": 0 00:26:23.601 }, 00:26:23.601 "claimed": false, 00:26:23.601 "zoned": false, 00:26:23.601 "supported_io_types": { 00:26:23.601 "read": true, 00:26:23.601 "write": true, 00:26:23.601 "unmap": true, 00:26:23.601 "flush": false, 00:26:23.601 "reset": true, 00:26:23.601 "nvme_admin": false, 00:26:23.601 "nvme_io": false, 00:26:23.601 "nvme_io_md": false, 00:26:23.601 "write_zeroes": true, 00:26:23.601 "zcopy": false, 00:26:23.601 "get_zone_info": false, 00:26:23.601 "zone_management": false, 00:26:23.601 "zone_append": false, 00:26:23.601 "compare": false, 00:26:23.601 "compare_and_write": false, 00:26:23.601 "abort": false, 00:26:23.601 "seek_hole": true, 00:26:23.601 "seek_data": true, 00:26:23.601 "copy": false, 00:26:23.601 "nvme_iov_md": false 00:26:23.601 }, 00:26:23.601 "driver_specific": { 00:26:23.601 "lvol": { 00:26:23.601 "lvol_store_uuid": "e09870e9-03a2-4186-879e-099f7ea4cc86", 00:26:23.601 "base_bdev": "nvme0n1", 00:26:23.601 "thin_provision": true, 00:26:23.601 "num_allocated_clusters": 0, 00:26:23.601 "snapshot": false, 00:26:23.601 "clone": false, 00:26:23.601 "esnap_clone": false 00:26:23.601 } 00:26:23.601 } 00:26:23.601 } 00:26:23.601 ]' 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:23.601 00:03:56 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:23.861 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:24.124 { 00:26:24.124 "name": "add50ef1-7ce5-44fd-96cf-bb8083ce9542", 00:26:24.124 "aliases": [ 00:26:24.124 "lvs/nvme0n1p0" 00:26:24.124 ], 00:26:24.124 "product_name": "Logical Volume", 00:26:24.124 "block_size": 4096, 00:26:24.124 "num_blocks": 26476544, 00:26:24.124 "uuid": "add50ef1-7ce5-44fd-96cf-bb8083ce9542", 00:26:24.124 "assigned_rate_limits": { 00:26:24.124 "rw_ios_per_sec": 0, 00:26:24.124 "rw_mbytes_per_sec": 0, 00:26:24.124 "r_mbytes_per_sec": 0, 00:26:24.124 "w_mbytes_per_sec": 0 00:26:24.124 }, 00:26:24.124 "claimed": false, 00:26:24.124 "zoned": false, 00:26:24.124 "supported_io_types": { 00:26:24.124 "read": true, 00:26:24.124 "write": true, 00:26:24.124 "unmap": true, 00:26:24.124 "flush": false, 00:26:24.124 "reset": true, 00:26:24.124 "nvme_admin": false, 00:26:24.124 "nvme_io": false, 00:26:24.124 "nvme_io_md": false, 00:26:24.124 "write_zeroes": true, 00:26:24.124 "zcopy": false, 00:26:24.124 "get_zone_info": false, 00:26:24.124 "zone_management": false, 00:26:24.124 "zone_append": false, 00:26:24.124 "compare": false, 00:26:24.124 "compare_and_write": false, 00:26:24.124 "abort": false, 00:26:24.124 "seek_hole": true, 00:26:24.124 "seek_data": true, 00:26:24.124 "copy": false, 00:26:24.124 "nvme_iov_md": false 00:26:24.124 }, 00:26:24.124 "driver_specific": { 00:26:24.124 "lvol": { 00:26:24.124 "lvol_store_uuid": "e09870e9-03a2-4186-879e-099f7ea4cc86", 00:26:24.124 "base_bdev": "nvme0n1", 00:26:24.124 "thin_provision": true, 00:26:24.124 "num_allocated_clusters": 0, 00:26:24.124 "snapshot": false, 00:26:24.124 "clone": false, 00:26:24.124 "esnap_clone": false 00:26:24.124 } 00:26:24.124 } 00:26:24.124 } 00:26:24.124 ]' 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:24.124 00:03:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:24.382 00:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b add50ef1-7ce5-44fd-96cf-bb8083ce9542 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:24.641 { 00:26:24.641 "name": "add50ef1-7ce5-44fd-96cf-bb8083ce9542", 00:26:24.641 "aliases": [ 00:26:24.641 "lvs/nvme0n1p0" 00:26:24.641 ], 00:26:24.641 "product_name": "Logical Volume", 00:26:24.641 "block_size": 4096, 00:26:24.641 "num_blocks": 26476544, 00:26:24.641 "uuid": "add50ef1-7ce5-44fd-96cf-bb8083ce9542", 00:26:24.641 "assigned_rate_limits": { 00:26:24.641 "rw_ios_per_sec": 0, 00:26:24.641 "rw_mbytes_per_sec": 0, 00:26:24.641 "r_mbytes_per_sec": 0, 00:26:24.641 "w_mbytes_per_sec": 0 00:26:24.641 }, 00:26:24.641 "claimed": false, 00:26:24.641 "zoned": false, 00:26:24.641 "supported_io_types": { 00:26:24.641 "read": true, 00:26:24.641 "write": true, 00:26:24.641 "unmap": true, 00:26:24.641 "flush": false, 00:26:24.641 "reset": true, 00:26:24.641 "nvme_admin": false, 00:26:24.641 "nvme_io": false, 00:26:24.641 "nvme_io_md": false, 00:26:24.641 "write_zeroes": true, 00:26:24.641 "zcopy": false, 00:26:24.641 "get_zone_info": false, 00:26:24.641 "zone_management": false, 00:26:24.641 "zone_append": false, 00:26:24.641 "compare": false, 00:26:24.641 "compare_and_write": false, 00:26:24.641 "abort": false, 00:26:24.641 "seek_hole": true, 00:26:24.641 "seek_data": true, 00:26:24.641 "copy": false, 00:26:24.641 "nvme_iov_md": false 00:26:24.641 }, 00:26:24.641 "driver_specific": { 00:26:24.641 "lvol": { 00:26:24.641 "lvol_store_uuid": "e09870e9-03a2-4186-879e-099f7ea4cc86", 00:26:24.641 "base_bdev": "nvme0n1", 00:26:24.641 "thin_provision": true, 00:26:24.641 "num_allocated_clusters": 0, 00:26:24.641 "snapshot": false, 00:26:24.641 "clone": false, 00:26:24.641 "esnap_clone": false 00:26:24.641 } 00:26:24.641 } 00:26:24.641 } 00:26:24.641 ]' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d add50ef1-7ce5-44fd-96cf-bb8083ce9542 
--l2p_dram_limit 10' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:24.641 00:03:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d add50ef1-7ce5-44fd-96cf-bb8083ce9542 --l2p_dram_limit 10 -c nvc0n1p0 00:26:24.902 [2024-12-06 00:03:57.372876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.372912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:24.902 [2024-12-06 00:03:57.372924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:24.902 [2024-12-06 00:03:57.372931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.372989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.372997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:24.902 [2024-12-06 00:03:57.373005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:24.902 [2024-12-06 00:03:57.373011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.373029] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:24.902 [2024-12-06 00:03:57.373619] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:24.902 [2024-12-06 00:03:57.373635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.373641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:24.902 [2024-12-06 00:03:57.373649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:26:24.902 [2024-12-06 00:03:57.373654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.373703] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 93384091-e3d9-48f1-bf76-2666e57b6b04 00:26:24.902 [2024-12-06 00:03:57.374623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.374651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:24.902 [2024-12-06 00:03:57.374659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:24.902 [2024-12-06 00:03:57.374666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.379344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.379373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:24.902 [2024-12-06 00:03:57.379381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:26:24.902 [2024-12-06 00:03:57.379388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.379455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.379464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:24.902 [2024-12-06 00:03:57.379470] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:24.902 [2024-12-06 00:03:57.379479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.379512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.379521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:24.902 [2024-12-06 00:03:57.379530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:24.902 [2024-12-06 00:03:57.379537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.379553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:24.902 [2024-12-06 00:03:57.382428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.382535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:24.902 [2024-12-06 00:03:57.382552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:26:24.902 [2024-12-06 00:03:57.382558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.382588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.382595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:24.902 [2024-12-06 00:03:57.382602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:24.902 [2024-12-06 00:03:57.382608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.382629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:24.902 [2024-12-06 00:03:57.382738] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:24.902 [2024-12-06 00:03:57.382751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:24.902 [2024-12-06 00:03:57.382760] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:24.902 [2024-12-06 00:03:57.382769] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:24.902 [2024-12-06 00:03:57.382776] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:24.902 [2024-12-06 00:03:57.382784] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:24.902 [2024-12-06 00:03:57.382789] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:24.902 [2024-12-06 00:03:57.382800] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:24.902 [2024-12-06 00:03:57.382806] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:24.902 [2024-12-06 00:03:57.382813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.382824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:24.902 [2024-12-06 00:03:57.382831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:26:24.902 [2024-12-06 00:03:57.382837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.382903] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.902 [2024-12-06 00:03:57.382910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:24.902 [2024-12-06 00:03:57.382917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:24.902 [2024-12-06 00:03:57.382922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.902 [2024-12-06 00:03:57.383015] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:24.902 [2024-12-06 00:03:57.383024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:24.902 [2024-12-06 00:03:57.383032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.902 [2024-12-06 00:03:57.383038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.902 [2024-12-06 00:03:57.383046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:24.902 [2024-12-06 00:03:57.383051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:24.903 [2024-12-06 00:03:57.383071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.903 [2024-12-06 00:03:57.383084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:24.903 [2024-12-06 00:03:57.383089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:24.903 [2024-12-06 00:03:57.383096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.903 [2024-12-06 00:03:57.383102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:24.903 [2024-12-06 00:03:57.383109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:24.903 [2024-12-06 00:03:57.383114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:24.903 [2024-12-06 00:03:57.383127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:24.903 [2024-12-06 00:03:57.383145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:24.903 [2024-12-06 00:03:57.383162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:24.903 [2024-12-06 00:03:57.383178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383189] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:24.903 [2024-12-06 00:03:57.383194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:24.903 [2024-12-06 00:03:57.383214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.903 [2024-12-06 00:03:57.383225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:24.903 [2024-12-06 00:03:57.383230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:24.903 [2024-12-06 00:03:57.383238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.903 [2024-12-06 00:03:57.383242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:24.903 [2024-12-06 00:03:57.383250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:24.903 [2024-12-06 00:03:57.383255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:24.903 [2024-12-06 00:03:57.383266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:24.903 [2024-12-06 00:03:57.383272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:24.903 [2024-12-06 00:03:57.383285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:24.903 [2024-12-06 00:03:57.383290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.903 [2024-12-06 00:03:57.383306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:24.903 [2024-12-06 00:03:57.383314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:24.903 [2024-12-06 00:03:57.383319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:24.903 [2024-12-06 00:03:57.383325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:24.903 [2024-12-06 00:03:57.383330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:24.903 [2024-12-06 00:03:57.383336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:24.903 [2024-12-06 00:03:57.383343] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:24.903 [2024-12-06 00:03:57.383353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:24.903 [2024-12-06 00:03:57.383367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:24.903 [2024-12-06 00:03:57.383372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:24.903 [2024-12-06 00:03:57.383379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:24.903 [2024-12-06 00:03:57.383384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:24.903 [2024-12-06 00:03:57.383391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:24.903 [2024-12-06 00:03:57.383397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:24.903 [2024-12-06 00:03:57.383405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:24.903 [2024-12-06 00:03:57.383410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:24.903 [2024-12-06 00:03:57.383418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:24.903 [2024-12-06 00:03:57.383449] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:24.903 [2024-12-06 00:03:57.383456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.903 [2024-12-06 00:03:57.383469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:24.903 [2024-12-06 00:03:57.383474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:24.903 [2024-12-06 00:03:57.383482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:24.903 [2024-12-06 00:03:57.383487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.903 [2024-12-06 00:03:57.383495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:24.903 [2024-12-06 00:03:57.383500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:26:24.903 [2024-12-06 00:03:57.383507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.903 [2024-12-06 00:03:57.383548] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:24.903 [2024-12-06 00:03:57.383560] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:30.188 [2024-12-06 00:04:02.750122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.750210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:30.188 [2024-12-06 00:04:02.750228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5366.554 ms 00:26:30.188 [2024-12-06 00:04:02.750240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.782216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.782283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.188 [2024-12-06 00:04:02.782297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.729 ms 00:26:30.188 [2024-12-06 00:04:02.782308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.782446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.782460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:30.188 [2024-12-06 00:04:02.782470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:30.188 [2024-12-06 00:04:02.782488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.817627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.817923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.188 [2024-12-06 00:04:02.817939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.082 ms 00:26:30.188 [2024-12-06 00:04:02.817950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.818019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.818038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.188 [2024-12-06 00:04:02.818050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:30.188 [2024-12-06 00:04:02.818067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.818644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.818686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.188 [2024-12-06 00:04:02.818698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:26:30.188 [2024-12-06 00:04:02.818709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.818825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.818838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:30.188 [2024-12-06 00:04:02.818850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:26:30.188 [2024-12-06 00:04:02.818863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.836125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.836348] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.188 [2024-12-06 00:04:02.836368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.241 ms 00:26:30.188 [2024-12-06 00:04:02.836380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.188 [2024-12-06 00:04:02.865687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:30.188 [2024-12-06 00:04:02.869658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.188 [2024-12-06 00:04:02.869705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:30.188 [2024-12-06 00:04:02.869720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.185 ms 00:26:30.188 [2024-12-06 00:04:02.869729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.450 [2024-12-06 00:04:03.059716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.450 [2024-12-06 00:04:03.059772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:30.450 [2024-12-06 00:04:03.059789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 189.940 ms 00:26:30.450 [2024-12-06 00:04:03.059798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.450 [2024-12-06 00:04:03.060043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.450 [2024-12-06 00:04:03.060061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:30.450 [2024-12-06 00:04:03.060077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:26:30.450 [2024-12-06 00:04:03.060086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.450 [2024-12-06 00:04:03.086496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.450 [2024-12-06 00:04:03.086545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:30.450 [2024-12-06 00:04:03.086562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.350 ms 00:26:30.450 [2024-12-06 00:04:03.086571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.450 [2024-12-06 00:04:03.112123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.450 [2024-12-06 00:04:03.112351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:30.450 [2024-12-06 00:04:03.112380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.492 ms 00:26:30.450 [2024-12-06 00:04:03.112389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.450 [2024-12-06 00:04:03.113021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.450 [2024-12-06 00:04:03.113047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:30.450 [2024-12-06 00:04:03.113060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:26:30.450 [2024-12-06 00:04:03.113072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.199423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.711 [2024-12-06 00:04:03.199571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:30.711 [2024-12-06 00:04:03.199597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.290 ms 00:26:30.711 [2024-12-06 00:04:03.199606] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.224560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.711 [2024-12-06 00:04:03.224593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:30.711 [2024-12-06 00:04:03.224606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.881 ms 00:26:30.711 [2024-12-06 00:04:03.224614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.247964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.711 [2024-12-06 00:04:03.248001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:30.711 [2024-12-06 00:04:03.248013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.311 ms 00:26:30.711 [2024-12-06 00:04:03.248020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.271930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.711 [2024-12-06 00:04:03.271962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:30.711 [2024-12-06 00:04:03.271988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.874 ms 00:26:30.711 [2024-12-06 00:04:03.271995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.272035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.711 [2024-12-06 00:04:03.272044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:30.711 [2024-12-06 00:04:03.272057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:30.711 [2024-12-06 00:04:03.272065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.272142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.711 [2024-12-06 00:04:03.272163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:30.711 [2024-12-06 00:04:03.272174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:30.711 [2024-12-06 00:04:03.272182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.711 [2024-12-06 00:04:03.273312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5899.991 ms, result 0 00:26:30.711 { 00:26:30.711 "name": "ftl0", 00:26:30.711 "uuid": "93384091-e3d9-48f1-bf76-2666e57b6b04" 00:26:30.711 } 00:26:30.711 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:30.711 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:30.972 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:30.972 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:30.972 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:31.233 /dev/nbd0 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:31.233 1+0 records in 00:26:31.233 1+0 records out 00:26:31.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296658 s, 13.8 MB/s 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:31.233 00:04:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:31.233 [2024-12-06 00:04:03.831215] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:26:31.233 [2024-12-06 00:04:03.831352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81091 ] 00:26:31.494 [2024-12-06 00:04:03.995528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.494 [2024-12-06 00:04:04.117476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.885  [2024-12-06T00:04:06.533Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-06T00:04:07.492Z] Copying: 407/1024 [MB] (212 MBps) [2024-12-06T00:04:08.431Z] Copying: 670/1024 [MB] (262 MBps) [2024-12-06T00:04:09.001Z] Copying: 929/1024 [MB] (259 MBps) [2024-12-06T00:04:09.571Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:26:36.862 00:26:36.862 00:04:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:38.778 00:04:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:38.778 [2024-12-06 00:04:11.343886] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:26:38.778 [2024-12-06 00:04:11.343991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81171 ] 00:26:39.039 [2024-12-06 00:04:11.499118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.039 [2024-12-06 00:04:11.591999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.423  [2024-12-06T00:04:14.074Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-06T00:04:15.011Z] Copying: 35/1024 [MB] (18 MBps) [2024-12-06T00:04:15.946Z] Copying: 52/1024 [MB] (17 MBps) [2024-12-06T00:04:16.880Z] Copying: 69/1024 [MB] (17 MBps) [2024-12-06T00:04:17.815Z] Copying: 86/1024 [MB] (16 MBps) [2024-12-06T00:04:19.187Z] Copying: 101/1024 [MB] (14 MBps) [2024-12-06T00:04:20.121Z] Copying: 117/1024 [MB] (16 MBps) [2024-12-06T00:04:21.105Z] Copying: 135/1024 [MB] (18 MBps) [2024-12-06T00:04:22.039Z] Copying: 155/1024 [MB] (19 MBps) [2024-12-06T00:04:22.972Z] Copying: 174/1024 [MB] (18 MBps) [2024-12-06T00:04:23.905Z] Copying: 192/1024 [MB] (18 MBps) [2024-12-06T00:04:24.839Z] Copying: 211/1024 [MB] (18 MBps) [2024-12-06T00:04:26.211Z] Copying: 228/1024 [MB] (17 MBps) [2024-12-06T00:04:27.146Z] Copying: 248/1024 [MB] (19 MBps) [2024-12-06T00:04:28.081Z] Copying: 282/1024 [MB] (33 MBps) [2024-12-06T00:04:29.014Z] Copying: 302/1024 [MB] (20 MBps) [2024-12-06T00:04:29.947Z] Copying: 323/1024 [MB] (20 MBps) [2024-12-06T00:04:30.879Z] Copying: 343/1024 [MB] (20 MBps) [2024-12-06T00:04:31.811Z] Copying: 359/1024 [MB] (15 MBps) [2024-12-06T00:04:33.186Z] Copying: 373/1024 [MB] (13 MBps) [2024-12-06T00:04:34.119Z] Copying: 405/1024 [MB] (32 MBps) [2024-12-06T00:04:35.077Z] Copying: 439/1024 [MB] (33 MBps) [2024-12-06T00:04:36.007Z] Copying: 472/1024 [MB] (32 MBps) [2024-12-06T00:04:36.941Z] Copying: 491/1024 [MB] (19 MBps) [2024-12-06T00:04:37.876Z] Copying: 508/1024 [MB] (16 MBps) [2024-12-06T00:04:38.811Z] Copying: 523/1024 [MB] (15 MBps) [2024-12-06T00:04:40.185Z] Copying: 548/1024 [MB] (25 MBps) [2024-12-06T00:04:41.120Z] Copying: 567/1024 [MB] (18 MBps) [2024-12-06T00:04:42.055Z] Copying: 598/1024 [MB] (30 MBps) [2024-12-06T00:04:42.986Z] Copying: 616/1024 [MB] (18 MBps) [2024-12-06T00:04:43.919Z] Copying: 636/1024 [MB] (19 MBps) [2024-12-06T00:04:44.848Z] Copying: 655/1024 [MB] (18 MBps) [2024-12-06T00:04:46.226Z] Copying: 682/1024 [MB] (27 MBps) [2024-12-06T00:04:47.158Z] Copying: 713/1024 [MB] (30 MBps) [2024-12-06T00:04:48.093Z] Copying: 733/1024 [MB] (20 MBps) [2024-12-06T00:04:49.043Z] Copying: 752/1024 [MB] (18 MBps) [2024-12-06T00:04:50.058Z] Copying: 777/1024 [MB] (25 MBps) [2024-12-06T00:04:50.991Z] Copying: 801/1024 [MB] (23 MBps) [2024-12-06T00:04:51.925Z] Copying: 828/1024 [MB] (27 MBps) [2024-12-06T00:04:52.859Z] Copying: 852/1024 [MB] (23 MBps) [2024-12-06T00:04:54.233Z] Copying: 885/1024 [MB] (33 MBps) [2024-12-06T00:04:55.167Z] Copying: 914/1024 [MB] (29 MBps) [2024-12-06T00:04:56.097Z] Copying: 941/1024 [MB] (26 MBps) [2024-12-06T00:04:57.027Z] Copying: 975/1024 [MB] (33 MBps) [2024-12-06T00:04:57.285Z] Copying: 1009/1024 [MB] (33 MBps) [2024-12-06T00:04:57.851Z] Copying: 1024/1024 [MB] (average 22 MBps) 00:27:25.142 00:27:25.142 00:04:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:25.142 00:04:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 
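The write phase recorded above can be summarized as the following condensed bash sketch. It only restates commands already visible in this log (modprobe nbd, rpc.py nbd_start_disk, spdk_dd, md5sum, sync, rpc.py nbd_stop_disk) and is not the literal dirty_shutdown.sh source; the RPC, DD and TESTFILE variables are shorthand introduced here for brevity, not names used by the test itself.

  # Condensed sketch of the write phase shown above (not the literal test script).
  # Assumes the SPDK target is already running and the ftl0 bdev has been created.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

  modprobe nbd
  "$RPC" nbd_start_disk ftl0 /dev/nbd0           # expose ftl0 as /dev/nbd0
  grep -q -w nbd0 /proc/partitions               # waitfornbd: device is visible

  # Write 1 GiB (262144 x 4 KiB blocks) of random data to a file, record its
  # checksum, then copy the file onto the FTL device through the nbd block device.
  "$DD" -m 0x2 --if=/dev/urandom --of="$TESTFILE" --bs=4096 --count=262144
  md5sum "$TESTFILE"
  "$DD" -m 0x2 --if="$TESTFILE" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

  sync /dev/nbd0
  "$RPC" nbd_stop_disk /dev/nbd0

After this point the log shows the ftl0 bdev being unloaded and the target killed with SIGKILL, which is what leaves the FTL metadata in the dirty state exercised by the rest of the test.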
00:27:25.402 00:04:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:25.664 [2024-12-06 00:04:58.233899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.233940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.664 [2024-12-06 00:04:58.233950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:25.664 [2024-12-06 00:04:58.233958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.233989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.664 [2024-12-06 00:04:58.236104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.236240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.664 [2024-12-06 00:04:58.236257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:27:25.664 [2024-12-06 00:04:58.236264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.238561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.238588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.664 [2024-12-06 00:04:58.238597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:27:25.664 [2024-12-06 00:04:58.238603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.253573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.253600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.664 [2024-12-06 00:04:58.253610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.952 ms 00:27:25.664 [2024-12-06 00:04:58.253616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.258432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.258455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.664 [2024-12-06 00:04:58.258465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:27:25.664 [2024-12-06 00:04:58.258472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.276612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.276641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.664 [2024-12-06 00:04:58.276651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.088 ms 00:27:25.664 [2024-12-06 00:04:58.276657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.288906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.288935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.664 [2024-12-06 00:04:58.288948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.217 ms 00:27:25.664 [2024-12-06 00:04:58.288954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.289071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:25.664 [2024-12-06 00:04:58.289080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.664 [2024-12-06 00:04:58.289088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:25.664 [2024-12-06 00:04:58.289093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.306946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.306984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.664 [2024-12-06 00:04:58.306994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.839 ms 00:27:25.664 [2024-12-06 00:04:58.307000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.324300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.324325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.664 [2024-12-06 00:04:58.324334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.271 ms 00:27:25.664 [2024-12-06 00:04:58.324340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.341570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.341594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.664 [2024-12-06 00:04:58.341603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.200 ms 00:27:25.664 [2024-12-06 00:04:58.341609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.358232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.664 [2024-12-06 00:04:58.358330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.664 [2024-12-06 00:04:58.358345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.568 ms 00:27:25.664 [2024-12-06 00:04:58.358351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.664 [2024-12-06 00:04:58.358376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.664 [2024-12-06 00:04:58.358386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:27:25.664 [2024-12-06 00:04:58.358451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.664 [2024-12-06 00:04:58.358528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358934] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.358996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.665 [2024-12-06 00:04:58.359070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.665 [2024-12-06 00:04:58.359078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93384091-e3d9-48f1-bf76-2666e57b6b04 00:27:25.665 [2024-12-06 00:04:58.359084] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:25.665 [2024-12-06 00:04:58.359091] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:25.665 [2024-12-06 00:04:58.359098] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.665 [2024-12-06 00:04:58.359105] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.665 [2024-12-06 00:04:58.359111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.665 [2024-12-06 00:04:58.359118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.665 [2024-12-06 00:04:58.359124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.665 [2024-12-06 00:04:58.359130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.665 [2024-12-06 
00:04:58.359135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.665 [2024-12-06 00:04:58.359141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.665 [2024-12-06 00:04:58.359147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.666 [2024-12-06 00:04:58.359155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:27:25.666 [2024-12-06 00:04:58.359160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.666 [2024-12-06 00:04:58.368690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.666 [2024-12-06 00:04:58.368716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.666 [2024-12-06 00:04:58.368725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.503 ms 00:27:25.666 [2024-12-06 00:04:58.368730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.666 [2024-12-06 00:04:58.369011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.666 [2024-12-06 00:04:58.369019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.666 [2024-12-06 00:04:58.369027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:27:25.666 [2024-12-06 00:04:58.369033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.402218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.402244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.925 [2024-12-06 00:04:58.402254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.402260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.402304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.402311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.925 [2024-12-06 00:04:58.402318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.402323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.402400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.402409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.925 [2024-12-06 00:04:58.402417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.402422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.402437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.402443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.925 [2024-12-06 00:04:58.402450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.402456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.462613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.462645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.925 [2024-12-06 00:04:58.462655] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.462661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.511301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.511333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.925 [2024-12-06 00:04:58.511342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.511349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.925 [2024-12-06 00:04:58.511403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.925 [2024-12-06 00:04:58.511410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.925 [2024-12-06 00:04:58.511420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.925 [2024-12-06 00:04:58.511427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.926 [2024-12-06 00:04:58.511477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.926 [2024-12-06 00:04:58.511484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.926 [2024-12-06 00:04:58.511492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.926 [2024-12-06 00:04:58.511497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.926 [2024-12-06 00:04:58.511568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.926 [2024-12-06 00:04:58.511575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.926 [2024-12-06 00:04:58.511583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.926 [2024-12-06 00:04:58.511590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.926 [2024-12-06 00:04:58.511615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.926 [2024-12-06 00:04:58.511622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.926 [2024-12-06 00:04:58.511629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.926 [2024-12-06 00:04:58.511634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.926 [2024-12-06 00:04:58.511663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.926 [2024-12-06 00:04:58.511669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.926 [2024-12-06 00:04:58.511676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.926 [2024-12-06 00:04:58.511683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.926 [2024-12-06 00:04:58.511719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.926 [2024-12-06 00:04:58.511727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.926 [2024-12-06 00:04:58.511734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.926 [2024-12-06 00:04:58.511740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.926 [2024-12-06 00:04:58.511842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.915 ms, result 0 00:27:25.926 true 00:27:25.926 00:04:58 ftl.ftl_dirty_shutdown 
-- ftl/dirty_shutdown.sh@83 -- # kill -9 80927 00:27:25.926 00:04:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80927 00:27:25.926 00:04:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:25.926 [2024-12-06 00:04:58.600845] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:27:25.926 [2024-12-06 00:04:58.600989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81657 ] 00:27:26.185 [2024-12-06 00:04:58.758277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.185 [2024-12-06 00:04:58.839259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.569  [2024-12-06T00:05:01.221Z] Copying: 260/1024 [MB] (260 MBps) [2024-12-06T00:05:02.163Z] Copying: 517/1024 [MB] (257 MBps) [2024-12-06T00:05:03.107Z] Copying: 777/1024 [MB] (260 MBps) [2024-12-06T00:05:03.679Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:27:30.970 00:27:30.970 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80927 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:30.970 00:05:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:30.970 [2024-12-06 00:05:03.609807] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:27:30.970 [2024-12-06 00:05:03.609938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81711 ] 00:27:31.231 [2024-12-06 00:05:03.767825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.231 [2024-12-06 00:05:03.842443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.493 [2024-12-06 00:05:04.053262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.493 [2024-12-06 00:05:04.053312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.493 [2024-12-06 00:05:04.115905] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:31.493 [2024-12-06 00:05:04.116271] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:31.493 [2024-12-06 00:05:04.116392] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:31.756 [2024-12-06 00:05:04.369543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.369575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:31.757 [2024-12-06 00:05:04.369587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:31.757 [2024-12-06 00:05:04.369596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.369634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.369642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:31.757 [2024-12-06 00:05:04.369649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:31.757 [2024-12-06 00:05:04.369656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.369668] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:31.757 [2024-12-06 00:05:04.370208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:31.757 [2024-12-06 00:05:04.370221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.370227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:31.757 [2024-12-06 00:05:04.370233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:27:31.757 [2024-12-06 00:05:04.370239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.371213] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:31.757 [2024-12-06 00:05:04.380904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.380928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:31.757 [2024-12-06 00:05:04.380936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.691 ms 00:27:31.757 [2024-12-06 00:05:04.380942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.380994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.381002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:31.757 [2024-12-06 00:05:04.381009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:31.757 [2024-12-06 00:05:04.381015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.385401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.385423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:31.757 [2024-12-06 00:05:04.385431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:27:31.757 [2024-12-06 00:05:04.385436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.385492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.385499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:31.757 [2024-12-06 00:05:04.385505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:31.757 [2024-12-06 00:05:04.385511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.385543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.385550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:31.757 [2024-12-06 00:05:04.385555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:31.757 [2024-12-06 00:05:04.385561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.385575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:31.757 [2024-12-06 00:05:04.388121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.388147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:31.757 [2024-12-06 00:05:04.388154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:27:31.757 [2024-12-06 00:05:04.388160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.388186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.388193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:31.757 [2024-12-06 00:05:04.388200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:31.757 [2024-12-06 00:05:04.388205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.388220] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:31.757 [2024-12-06 00:05:04.388235] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:31.757 [2024-12-06 00:05:04.388261] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:31.757 [2024-12-06 00:05:04.388273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:31.757 [2024-12-06 00:05:04.388351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:31.757 [2024-12-06 00:05:04.388359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:31.757 
[2024-12-06 00:05:04.388367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:31.757 [2024-12-06 00:05:04.388377] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:31.757 [2024-12-06 00:05:04.388383] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:31.757 [2024-12-06 00:05:04.388390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:31.757 [2024-12-06 00:05:04.388396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:31.757 [2024-12-06 00:05:04.388401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:31.757 [2024-12-06 00:05:04.388406] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:31.757 [2024-12-06 00:05:04.388412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.388417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:31.757 [2024-12-06 00:05:04.388423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:27:31.757 [2024-12-06 00:05:04.388428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.388491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.757 [2024-12-06 00:05:04.388499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:31.757 [2024-12-06 00:05:04.388504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:31.757 [2024-12-06 00:05:04.388509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.757 [2024-12-06 00:05:04.388583] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:31.757 [2024-12-06 00:05:04.388591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:31.757 [2024-12-06 00:05:04.388597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.757 [2024-12-06 00:05:04.388603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.757 [2024-12-06 00:05:04.388608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:31.757 [2024-12-06 00:05:04.388613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:31.757 [2024-12-06 00:05:04.388618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:31.757 [2024-12-06 00:05:04.388625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:31.757 [2024-12-06 00:05:04.388630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:31.757 [2024-12-06 00:05:04.388639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.757 [2024-12-06 00:05:04.388644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:31.757 [2024-12-06 00:05:04.388649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:31.757 [2024-12-06 00:05:04.388658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.757 [2024-12-06 00:05:04.388663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:31.757 [2024-12-06 00:05:04.388668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:31.757 [2024-12-06 00:05:04.388673] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.757 [2024-12-06 00:05:04.388678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:31.758 [2024-12-06 00:05:04.388683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:31.758 [2024-12-06 00:05:04.388698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:31.758 [2024-12-06 00:05:04.388713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:31.758 [2024-12-06 00:05:04.388727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:31.758 [2024-12-06 00:05:04.388741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:31.758 [2024-12-06 00:05:04.388756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.758 [2024-12-06 00:05:04.388766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:31.758 [2024-12-06 00:05:04.388771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:31.758 [2024-12-06 00:05:04.388775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.758 [2024-12-06 00:05:04.388780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:31.758 [2024-12-06 00:05:04.388785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:31.758 [2024-12-06 00:05:04.388789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:31.758 [2024-12-06 00:05:04.388799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:31.758 [2024-12-06 00:05:04.388804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.758 [2024-12-06 00:05:04.388810] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:31.758 [2024-12-06 00:05:04.388817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:31.758 [2024-12-06 00:05:04.388825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.758 [2024-12-06 
00:05:04.388835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:31.758 [2024-12-06 00:05:04.388840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:31.758 [2024-12-06 00:05:04.388846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:31.758 [2024-12-06 00:05:04.388851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:31.758 [2024-12-06 00:05:04.388856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:31.758 [2024-12-06 00:05:04.388861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:31.758 [2024-12-06 00:05:04.388867] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:31.758 [2024-12-06 00:05:04.388873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:31.758 [2024-12-06 00:05:04.388886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:31.758 [2024-12-06 00:05:04.388891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:31.758 [2024-12-06 00:05:04.388896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:31.758 [2024-12-06 00:05:04.388901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:31.758 [2024-12-06 00:05:04.388906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:31.758 [2024-12-06 00:05:04.388912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:31.758 [2024-12-06 00:05:04.388917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:31.758 [2024-12-06 00:05:04.388922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:31.758 [2024-12-06 00:05:04.388928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:31.758 [2024-12-06 00:05:04.388954] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:31.758 [2024-12-06 00:05:04.388960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:31.758 [2024-12-06 00:05:04.388983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:31.758 [2024-12-06 00:05:04.388988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:31.758 [2024-12-06 00:05:04.388993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:31.758 [2024-12-06 00:05:04.388999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.758 [2024-12-06 00:05:04.389008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:31.758 [2024-12-06 00:05:04.389014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:27:31.758 [2024-12-06 00:05:04.389019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.758 [2024-12-06 00:05:04.410115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.758 [2024-12-06 00:05:04.410141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.758 [2024-12-06 00:05:04.410149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.062 ms 00:27:31.758 [2024-12-06 00:05:04.410155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.758 [2024-12-06 00:05:04.410225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.758 [2024-12-06 00:05:04.410232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:31.758 [2024-12-06 00:05:04.410238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:31.758 [2024-12-06 00:05:04.410244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.758 [2024-12-06 00:05:04.446425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.758 [2024-12-06 00:05:04.446461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.758 [2024-12-06 00:05:04.446473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.137 ms 00:27:31.758 [2024-12-06 00:05:04.446479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.758 [2024-12-06 00:05:04.446523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.759 [2024-12-06 00:05:04.446530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.759 [2024-12-06 00:05:04.446537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:31.759 [2024-12-06 00:05:04.446543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.759 [2024-12-06 00:05:04.446870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.759 [2024-12-06 00:05:04.446884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.759 [2024-12-06 00:05:04.446891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:27:31.759 [2024-12-06 00:05:04.446902] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.759 [2024-12-06 00:05:04.447012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.759 [2024-12-06 00:05:04.447020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.759 [2024-12-06 00:05:04.447026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:31.759 [2024-12-06 00:05:04.447032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.759 [2024-12-06 00:05:04.457720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.759 [2024-12-06 00:05:04.457743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.759 [2024-12-06 00:05:04.457752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.671 ms 00:27:31.759 [2024-12-06 00:05:04.457757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.467522] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:32.022 [2024-12-06 00:05:04.467546] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:32.022 [2024-12-06 00:05:04.467555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.467562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:32.022 [2024-12-06 00:05:04.467568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.719 ms 00:27:32.022 [2024-12-06 00:05:04.467574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.486417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.486439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:32.022 [2024-12-06 00:05:04.486448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.811 ms 00:27:32.022 [2024-12-06 00:05:04.486456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.495575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.495599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:32.022 [2024-12-06 00:05:04.495606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.088 ms 00:27:32.022 [2024-12-06 00:05:04.495611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.504413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.504435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:32.022 [2024-12-06 00:05:04.504443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.772 ms 00:27:32.022 [2024-12-06 00:05:04.504448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.504908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.504922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:32.022 [2024-12-06 00:05:04.504930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:27:32.022 [2024-12-06 00:05:04.504936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 
[2024-12-06 00:05:04.549098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.549137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:32.022 [2024-12-06 00:05:04.549148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.148 ms 00:27:32.022 [2024-12-06 00:05:04.549155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.556954] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:32.022 [2024-12-06 00:05:04.558884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.558904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:32.022 [2024-12-06 00:05:04.558913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.689 ms 00:27:32.022 [2024-12-06 00:05:04.558924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.559001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.559010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:32.022 [2024-12-06 00:05:04.559017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:32.022 [2024-12-06 00:05:04.559023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.559065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.559072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:32.022 [2024-12-06 00:05:04.559079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:32.022 [2024-12-06 00:05:04.559085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.559102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.559108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:32.022 [2024-12-06 00:05:04.559115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:32.022 [2024-12-06 00:05:04.559121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.559146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:32.022 [2024-12-06 00:05:04.559154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.559160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:32.022 [2024-12-06 00:05:04.559166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:32.022 [2024-12-06 00:05:04.559175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.577254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.577281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:32.022 [2024-12-06 00:05:04.577290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.066 ms 00:27:32.022 [2024-12-06 00:05:04.577297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.577351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.022 [2024-12-06 00:05:04.577358] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:32.022 [2024-12-06 00:05:04.577365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:32.022 [2024-12-06 00:05:04.577371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.022 [2024-12-06 00:05:04.578485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.610 ms, result 0 00:27:32.963  [2024-12-06T00:05:06.615Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-06T00:05:08.001Z] Copying: 33/1024 [MB] (12 MBps) [2024-12-06T00:05:08.944Z] Copying: 46/1024 [MB] (13 MBps) [2024-12-06T00:05:09.887Z] Copying: 60/1024 [MB] (13 MBps) [2024-12-06T00:05:10.870Z] Copying: 78/1024 [MB] (18 MBps) [2024-12-06T00:05:11.810Z] Copying: 94/1024 [MB] (15 MBps) [2024-12-06T00:05:12.750Z] Copying: 114/1024 [MB] (20 MBps) [2024-12-06T00:05:13.692Z] Copying: 129/1024 [MB] (15 MBps) [2024-12-06T00:05:14.660Z] Copying: 147/1024 [MB] (17 MBps) [2024-12-06T00:05:15.601Z] Copying: 164/1024 [MB] (17 MBps) [2024-12-06T00:05:17.000Z] Copying: 178/1024 [MB] (14 MBps) [2024-12-06T00:05:17.626Z] Copying: 195/1024 [MB] (17 MBps) [2024-12-06T00:05:19.011Z] Copying: 214/1024 [MB] (18 MBps) [2024-12-06T00:05:19.957Z] Copying: 232/1024 [MB] (18 MBps) [2024-12-06T00:05:20.900Z] Copying: 254/1024 [MB] (21 MBps) [2024-12-06T00:05:21.844Z] Copying: 273/1024 [MB] (19 MBps) [2024-12-06T00:05:22.788Z] Copying: 293/1024 [MB] (19 MBps) [2024-12-06T00:05:23.732Z] Copying: 315/1024 [MB] (21 MBps) [2024-12-06T00:05:24.675Z] Copying: 336/1024 [MB] (21 MBps) [2024-12-06T00:05:25.636Z] Copying: 360/1024 [MB] (24 MBps) [2024-12-06T00:05:27.023Z] Copying: 384/1024 [MB] (24 MBps) [2024-12-06T00:05:27.595Z] Copying: 420/1024 [MB] (35 MBps) [2024-12-06T00:05:28.984Z] Copying: 435/1024 [MB] (15 MBps) [2024-12-06T00:05:29.928Z] Copying: 456/1024 [MB] (20 MBps) [2024-12-06T00:05:30.871Z] Copying: 479/1024 [MB] (22 MBps) [2024-12-06T00:05:31.814Z] Copying: 492/1024 [MB] (13 MBps) [2024-12-06T00:05:32.758Z] Copying: 531/1024 [MB] (38 MBps) [2024-12-06T00:05:33.700Z] Copying: 569/1024 [MB] (38 MBps) [2024-12-06T00:05:34.645Z] Copying: 609/1024 [MB] (39 MBps) [2024-12-06T00:05:35.600Z] Copying: 630/1024 [MB] (21 MBps) [2024-12-06T00:05:36.985Z] Copying: 649/1024 [MB] (18 MBps) [2024-12-06T00:05:37.927Z] Copying: 669/1024 [MB] (20 MBps) [2024-12-06T00:05:38.869Z] Copying: 691/1024 [MB] (21 MBps) [2024-12-06T00:05:39.810Z] Copying: 712/1024 [MB] (21 MBps) [2024-12-06T00:05:40.753Z] Copying: 733/1024 [MB] (20 MBps) [2024-12-06T00:05:41.700Z] Copying: 753/1024 [MB] (20 MBps) [2024-12-06T00:05:42.642Z] Copying: 773/1024 [MB] (19 MBps) [2024-12-06T00:05:44.030Z] Copying: 791/1024 [MB] (17 MBps) [2024-12-06T00:05:44.602Z] Copying: 805/1024 [MB] (14 MBps) [2024-12-06T00:05:45.998Z] Copying: 823/1024 [MB] (17 MBps) [2024-12-06T00:05:46.999Z] Copying: 839/1024 [MB] (16 MBps) [2024-12-06T00:05:47.943Z] Copying: 858/1024 [MB] (18 MBps) [2024-12-06T00:05:48.887Z] Copying: 875/1024 [MB] (17 MBps) [2024-12-06T00:05:49.831Z] Copying: 893/1024 [MB] (18 MBps) [2024-12-06T00:05:50.775Z] Copying: 909/1024 [MB] (16 MBps) [2024-12-06T00:05:51.717Z] Copying: 923/1024 [MB] (13 MBps) [2024-12-06T00:05:52.659Z] Copying: 933/1024 [MB] (10 MBps) [2024-12-06T00:05:53.604Z] Copying: 943/1024 [MB] (10 MBps) [2024-12-06T00:05:54.989Z] Copying: 953/1024 [MB] (10 MBps) [2024-12-06T00:05:55.932Z] Copying: 968/1024 [MB] (14 MBps) [2024-12-06T00:05:56.875Z] Copying: 978/1024 [MB] (10 MBps) 
[2024-12-06T00:05:57.821Z] Copying: 989/1024 [MB] (10 MBps) [2024-12-06T00:05:58.766Z] Copying: 999/1024 [MB] (10 MBps) [2024-12-06T00:05:59.708Z] Copying: 1014/1024 [MB] (14 MBps) [2024-12-06T00:06:00.651Z] Copying: 1047728/1048576 [kB] (9320 kBps) [2024-12-06T00:06:00.651Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-06 00:06:00.318692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.318771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:27.942 [2024-12-06 00:06:00.318789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:27.942 [2024-12-06 00:06:00.318800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.320387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:27.942 [2024-12-06 00:06:00.327931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.327990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:27.942 [2024-12-06 00:06:00.328003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms 00:28:27.942 [2024-12-06 00:06:00.328021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.338433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.338483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:27.942 [2024-12-06 00:06:00.338496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.623 ms 00:28:27.942 [2024-12-06 00:06:00.338505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.359959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.360015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:27.942 [2024-12-06 00:06:00.360026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.434 ms 00:28:27.942 [2024-12-06 00:06:00.360035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.366231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.366274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:27.942 [2024-12-06 00:06:00.366286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:28:27.942 [2024-12-06 00:06:00.366294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.392374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.392424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:27.942 [2024-12-06 00:06:00.392437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.020 ms 00:28:27.942 [2024-12-06 00:06:00.392445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.407904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.407953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:27.942 [2024-12-06 00:06:00.407974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.413 ms 00:28:27.942 [2024-12-06 00:06:00.407983] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.513584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.513645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:27.942 [2024-12-06 00:06:00.513663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.551 ms 00:28:27.942 [2024-12-06 00:06:00.513671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.538385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.538431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:27.942 [2024-12-06 00:06:00.538442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.698 ms 00:28:27.942 [2024-12-06 00:06:00.538461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.563691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.563740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:27.942 [2024-12-06 00:06:00.563751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.188 ms 00:28:27.942 [2024-12-06 00:06:00.563759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.588457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.588506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:27.942 [2024-12-06 00:06:00.588517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.652 ms 00:28:27.942 [2024-12-06 00:06:00.588524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.612716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.942 [2024-12-06 00:06:00.612766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:27.942 [2024-12-06 00:06:00.612778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.120 ms 00:28:27.942 [2024-12-06 00:06:00.612785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.942 [2024-12-06 00:06:00.612829] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:27.942 [2024-12-06 00:06:00.612844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92160 / 261120 wr_cnt: 1 state: open 00:28:27.942 [2024-12-06 00:06:00.612855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:27.942 [2024-12-06 00:06:00.612864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:27.942 [2024-12-06 00:06:00.612873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 
0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.612993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613333] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 
00:06:00.613527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:27.943 [2024-12-06 00:06:00.613605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:27.944 [2024-12-06 00:06:00.613676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:27.944 [2024-12-06 00:06:00.613684] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93384091-e3d9-48f1-bf76-2666e57b6b04 00:28:27.944 [2024-12-06 00:06:00.613702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92160 00:28:27.944 [2024-12-06 00:06:00.613711] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93120 00:28:27.944 [2024-12-06 00:06:00.613718] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92160 00:28:27.944 [2024-12-06 00:06:00.613727] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:28:27.944 [2024-12-06 00:06:00.613734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:27.944 [2024-12-06 00:06:00.613743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:27.944 [2024-12-06 00:06:00.613752] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:27.944 [2024-12-06 00:06:00.613759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:27.944 [2024-12-06 00:06:00.613766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:27.944 [2024-12-06 00:06:00.613774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.944 [2024-12-06 00:06:00.613784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:27.944 [2024-12-06 00:06:00.613792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:28:27.944 [2024-12-06 00:06:00.613800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.944 [2024-12-06 00:06:00.627038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.944 [2024-12-06 00:06:00.627084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:27.944 [2024-12-06 00:06:00.627095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.210 ms 00:28:27.944 [2024-12-06 00:06:00.627103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.944 [2024-12-06 00:06:00.627507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.944 [2024-12-06 00:06:00.627518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:27.944 [2024-12-06 00:06:00.627533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:28:27.944 [2024-12-06 00:06:00.627541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.664181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.664233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.206 [2024-12-06 00:06:00.664246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.664256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.664327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.664337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.206 [2024-12-06 00:06:00.664352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.664360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.664424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.664436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.206 [2024-12-06 00:06:00.664446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.664455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.664473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.664483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.206 [2024-12-06 00:06:00.664492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.664501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.748469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:28.206 [2024-12-06 00:06:00.748528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.206 [2024-12-06 00:06:00.748541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.748550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.817648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.817704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.206 [2024-12-06 00:06:00.817716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.817732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.817816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.817827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.206 [2024-12-06 00:06:00.817837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.817846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.817885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.817895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.206 [2024-12-06 00:06:00.817903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.817912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.818034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.818046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.206 [2024-12-06 00:06:00.818055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.818063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.818094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.818104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.206 [2024-12-06 00:06:00.818113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.818122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.818166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.818175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.206 [2024-12-06 00:06:00.818183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.818191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.818241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.206 [2024-12-06 00:06:00.818252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.206 [2024-12-06 00:06:00.818262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.206 [2024-12-06 00:06:00.818270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.206 [2024-12-06 00:06:00.818408] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 501.026 ms, result 0 00:28:29.590 00:28:29.590 00:28:29.590 00:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:32.138 00:06:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:32.138 [2024-12-06 00:06:04.607806] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:28:32.139 [2024-12-06 00:06:04.607945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82320 ] 00:28:32.139 [2024-12-06 00:06:04.768720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.400 [2024-12-06 00:06:04.871556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.661 [2024-12-06 00:06:05.159669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.661 [2024-12-06 00:06:05.159756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.661 [2024-12-06 00:06:05.321250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.661 [2024-12-06 00:06:05.321313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:32.661 [2024-12-06 00:06:05.321329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:32.661 [2024-12-06 00:06:05.321338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.661 [2024-12-06 00:06:05.321394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.661 [2024-12-06 00:06:05.321407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.661 [2024-12-06 00:06:05.321416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:32.661 [2024-12-06 00:06:05.321425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.661 [2024-12-06 00:06:05.321444] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:32.661 [2024-12-06 00:06:05.322158] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:32.661 [2024-12-06 00:06:05.322186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.661 [2024-12-06 00:06:05.322195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.661 [2024-12-06 00:06:05.322204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:28:32.661 [2024-12-06 00:06:05.322212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.661 [2024-12-06 00:06:05.323912] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:32.661 [2024-12-06 00:06:05.337899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.661 [2024-12-06 00:06:05.337948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:32.661 [2024-12-06 00:06:05.337961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 13.989 ms 00:28:32.661 [2024-12-06 00:06:05.337979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.661 [2024-12-06 00:06:05.338060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.661 [2024-12-06 00:06:05.338071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:32.661 [2024-12-06 00:06:05.338079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:32.661 [2024-12-06 00:06:05.338087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.661 [2024-12-06 00:06:05.345988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.661 [2024-12-06 00:06:05.346027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.661 [2024-12-06 00:06:05.346038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.822 ms 00:28:32.661 [2024-12-06 00:06:05.346051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.661 [2024-12-06 00:06:05.346133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.662 [2024-12-06 00:06:05.346142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.662 [2024-12-06 00:06:05.346151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:32.662 [2024-12-06 00:06:05.346159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.662 [2024-12-06 00:06:05.346203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.662 [2024-12-06 00:06:05.346213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:32.662 [2024-12-06 00:06:05.346222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:32.662 [2024-12-06 00:06:05.346230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.662 [2024-12-06 00:06:05.346257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.662 [2024-12-06 00:06:05.350362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.662 [2024-12-06 00:06:05.350399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.662 [2024-12-06 00:06:05.350413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.110 ms 00:28:32.662 [2024-12-06 00:06:05.350421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.662 [2024-12-06 00:06:05.350461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.662 [2024-12-06 00:06:05.350471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:32.662 [2024-12-06 00:06:05.350480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:32.662 [2024-12-06 00:06:05.350488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.662 [2024-12-06 00:06:05.350540] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:32.662 [2024-12-06 00:06:05.350566] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:32.662 [2024-12-06 00:06:05.350603] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:32.662 [2024-12-06 00:06:05.350624] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:28:32.662 [2024-12-06 00:06:05.350730] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:32.662 [2024-12-06 00:06:05.350741] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:32.662 [2024-12-06 00:06:05.350752] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:32.662 [2024-12-06 00:06:05.350763] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:32.662 [2024-12-06 00:06:05.350772] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:32.662 [2024-12-06 00:06:05.350781] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:32.662 [2024-12-06 00:06:05.350788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:32.662 [2024-12-06 00:06:05.350799] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:32.662 [2024-12-06 00:06:05.350809] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:32.662 [2024-12-06 00:06:05.350817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.662 [2024-12-06 00:06:05.350825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:32.662 [2024-12-06 00:06:05.350834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:28:32.662 [2024-12-06 00:06:05.350841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.662 [2024-12-06 00:06:05.350924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.662 [2024-12-06 00:06:05.350933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:32.662 [2024-12-06 00:06:05.350941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:32.662 [2024-12-06 00:06:05.350950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.662 [2024-12-06 00:06:05.351071] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:32.662 [2024-12-06 00:06:05.351099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:32.662 [2024-12-06 00:06:05.351110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:32.662 [2024-12-06 00:06:05.351134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:32.662 [2024-12-06 00:06:05.351157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.662 [2024-12-06 00:06:05.351171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:32.662 [2024-12-06 00:06:05.351177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:32.662 [2024-12-06 00:06:05.351184] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.662 [2024-12-06 00:06:05.351199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:32.662 [2024-12-06 00:06:05.351206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:32.662 [2024-12-06 00:06:05.351213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:32.662 [2024-12-06 00:06:05.351227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:32.662 [2024-12-06 00:06:05.351247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:32.662 [2024-12-06 00:06:05.351268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:32.662 [2024-12-06 00:06:05.351288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:32.662 [2024-12-06 00:06:05.351308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:32.662 [2024-12-06 00:06:05.351329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.662 [2024-12-06 00:06:05.351344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:32.662 [2024-12-06 00:06:05.351351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:32.662 [2024-12-06 00:06:05.351357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.662 [2024-12-06 00:06:05.351364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:32.662 [2024-12-06 00:06:05.351371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:32.662 [2024-12-06 00:06:05.351378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:32.662 [2024-12-06 00:06:05.351392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:32.662 [2024-12-06 00:06:05.351400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351406] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:32.662 [2024-12-06 
00:06:05.351414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:32.662 [2024-12-06 00:06:05.351426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.662 [2024-12-06 00:06:05.351442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:32.662 [2024-12-06 00:06:05.351449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:32.662 [2024-12-06 00:06:05.351456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:32.662 [2024-12-06 00:06:05.351464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:32.662 [2024-12-06 00:06:05.351470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:32.662 [2024-12-06 00:06:05.351477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:32.662 [2024-12-06 00:06:05.351486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:32.662 [2024-12-06 00:06:05.351496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.662 [2024-12-06 00:06:05.351508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:32.662 [2024-12-06 00:06:05.351516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:32.662 [2024-12-06 00:06:05.351524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:32.662 [2024-12-06 00:06:05.351531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:32.662 [2024-12-06 00:06:05.351539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:32.662 [2024-12-06 00:06:05.351547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:32.662 [2024-12-06 00:06:05.351555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:32.662 [2024-12-06 00:06:05.351562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:32.662 [2024-12-06 00:06:05.351570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:32.662 [2024-12-06 00:06:05.351577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:32.663 [2024-12-06 00:06:05.351585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:32.663 [2024-12-06 00:06:05.351593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:32.663 [2024-12-06 00:06:05.351600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:28:32.663 [2024-12-06 00:06:05.351608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:32.663 [2024-12-06 00:06:05.351615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:32.663 [2024-12-06 00:06:05.351624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.663 [2024-12-06 00:06:05.351633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:32.663 [2024-12-06 00:06:05.351641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:32.663 [2024-12-06 00:06:05.351648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:32.663 [2024-12-06 00:06:05.351655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:32.663 [2024-12-06 00:06:05.351665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.663 [2024-12-06 00:06:05.351674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:32.663 [2024-12-06 00:06:05.351683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:28:32.663 [2024-12-06 00:06:05.351691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.383873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.383928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.923 [2024-12-06 00:06:05.383941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.135 ms 00:28:32.923 [2024-12-06 00:06:05.383953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.384067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.384077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:32.923 [2024-12-06 00:06:05.384086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:32.923 [2024-12-06 00:06:05.384096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.432679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.432730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.923 [2024-12-06 00:06:05.432744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.498 ms 00:28:32.923 [2024-12-06 00:06:05.432753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.432803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.432814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.923 [2024-12-06 00:06:05.432826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:32.923 [2024-12-06 00:06:05.432835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.433451] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.433494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.923 [2024-12-06 00:06:05.433505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:28:32.923 [2024-12-06 00:06:05.433513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.433672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.433682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.923 [2024-12-06 00:06:05.433698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:28:32.923 [2024-12-06 00:06:05.433706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.449584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.449631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.923 [2024-12-06 00:06:05.449643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.857 ms 00:28:32.923 [2024-12-06 00:06:05.449651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.463913] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:32.923 [2024-12-06 00:06:05.463957] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:32.923 [2024-12-06 00:06:05.463992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.464002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:32.923 [2024-12-06 00:06:05.464012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.230 ms 00:28:32.923 [2024-12-06 00:06:05.464019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.489962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.490016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:32.923 [2024-12-06 00:06:05.490028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.887 ms 00:28:32.923 [2024-12-06 00:06:05.490037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.502971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.503019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:32.923 [2024-12-06 00:06:05.503031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.867 ms 00:28:32.923 [2024-12-06 00:06:05.503038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.515484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.515531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:32.923 [2024-12-06 00:06:05.515542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.401 ms 00:28:32.923 [2024-12-06 00:06:05.515550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.516243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 
[2024-12-06 00:06:05.516282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:32.923 [2024-12-06 00:06:05.516296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:28:32.923 [2024-12-06 00:06:05.516304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.580897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.580960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:32.923 [2024-12-06 00:06:05.581001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.571 ms 00:28:32.923 [2024-12-06 00:06:05.581011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.592124] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:32.923 [2024-12-06 00:06:05.595080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.595120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:32.923 [2024-12-06 00:06:05.595132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.009 ms 00:28:32.923 [2024-12-06 00:06:05.595141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.595226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.595238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:32.923 [2024-12-06 00:06:05.595252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:32.923 [2024-12-06 00:06:05.595260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.596905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.596961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:32.923 [2024-12-06 00:06:05.596996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:28:32.923 [2024-12-06 00:06:05.597005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.597036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.597045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:32.923 [2024-12-06 00:06:05.597056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:32.923 [2024-12-06 00:06:05.597064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.597111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:32.923 [2024-12-06 00:06:05.597122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.597131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:32.923 [2024-12-06 00:06:05.597140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:32.923 [2024-12-06 00:06:05.597148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.923 [2024-12-06 00:06:05.622560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.923 [2024-12-06 00:06:05.622608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:32.923 
[2024-12-06 00:06:05.622627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.392 ms 00:28:32.924 [2024-12-06 00:06:05.622637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.924 [2024-12-06 00:06:05.622721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.924 [2024-12-06 00:06:05.622731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:32.924 [2024-12-06 00:06:05.622741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:32.924 [2024-12-06 00:06:05.622749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.182 [2024-12-06 00:06:05.632727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.145 ms, result 0 00:28:34.121  [2024-12-06T00:06:08.216Z] Copying: 1028/1048576 [kB] (1028 kBps) [2024-12-06T00:06:09.154Z] Copying: 4080/1048576 [kB] (3052 kBps) [2024-12-06T00:06:10.094Z] Copying: 19/1024 [MB] (15 MBps) [2024-12-06T00:06:11.036Z] Copying: 51/1024 [MB] (31 MBps) [2024-12-06T00:06:11.979Z] Copying: 76/1024 [MB] (25 MBps) [2024-12-06T00:06:12.923Z] Copying: 104/1024 [MB] (27 MBps) [2024-12-06T00:06:13.868Z] Copying: 135/1024 [MB] (30 MBps) [2024-12-06T00:06:15.260Z] Copying: 161/1024 [MB] (26 MBps) [2024-12-06T00:06:15.872Z] Copying: 189/1024 [MB] (27 MBps) [2024-12-06T00:06:17.255Z] Copying: 220/1024 [MB] (30 MBps) [2024-12-06T00:06:17.826Z] Copying: 245/1024 [MB] (24 MBps) [2024-12-06T00:06:19.213Z] Copying: 275/1024 [MB] (30 MBps) [2024-12-06T00:06:20.156Z] Copying: 301/1024 [MB] (25 MBps) [2024-12-06T00:06:21.098Z] Copying: 328/1024 [MB] (27 MBps) [2024-12-06T00:06:22.040Z] Copying: 355/1024 [MB] (27 MBps) [2024-12-06T00:06:22.984Z] Copying: 381/1024 [MB] (26 MBps) [2024-12-06T00:06:23.924Z] Copying: 408/1024 [MB] (26 MBps) [2024-12-06T00:06:24.864Z] Copying: 435/1024 [MB] (27 MBps) [2024-12-06T00:06:26.249Z] Copying: 460/1024 [MB] (25 MBps) [2024-12-06T00:06:27.189Z] Copying: 485/1024 [MB] (24 MBps) [2024-12-06T00:06:28.133Z] Copying: 511/1024 [MB] (26 MBps) [2024-12-06T00:06:29.074Z] Copying: 541/1024 [MB] (29 MBps) [2024-12-06T00:06:30.017Z] Copying: 562/1024 [MB] (20 MBps) [2024-12-06T00:06:30.959Z] Copying: 589/1024 [MB] (27 MBps) [2024-12-06T00:06:31.900Z] Copying: 616/1024 [MB] (27 MBps) [2024-12-06T00:06:32.856Z] Copying: 637/1024 [MB] (20 MBps) [2024-12-06T00:06:34.245Z] Copying: 661/1024 [MB] (24 MBps) [2024-12-06T00:06:35.191Z] Copying: 677/1024 [MB] (16 MBps) [2024-12-06T00:06:36.130Z] Copying: 704/1024 [MB] (26 MBps) [2024-12-06T00:06:37.070Z] Copying: 741/1024 [MB] (37 MBps) [2024-12-06T00:06:38.009Z] Copying: 770/1024 [MB] (29 MBps) [2024-12-06T00:06:39.004Z] Copying: 801/1024 [MB] (31 MBps) [2024-12-06T00:06:39.947Z] Copying: 830/1024 [MB] (28 MBps) [2024-12-06T00:06:40.889Z] Copying: 859/1024 [MB] (28 MBps) [2024-12-06T00:06:41.832Z] Copying: 888/1024 [MB] (29 MBps) [2024-12-06T00:06:43.222Z] Copying: 912/1024 [MB] (23 MBps) [2024-12-06T00:06:44.166Z] Copying: 931/1024 [MB] (18 MBps) [2024-12-06T00:06:45.178Z] Copying: 947/1024 [MB] (16 MBps) [2024-12-06T00:06:46.119Z] Copying: 965/1024 [MB] (17 MBps) [2024-12-06T00:06:47.060Z] Copying: 995/1024 [MB] (30 MBps) [2024-12-06T00:06:47.060Z] Copying: 1020/1024 [MB] (25 MBps) [2024-12-06T00:06:48.444Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-06 00:06:48.169446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.169559] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:15.735 [2024-12-06 00:06:48.169587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:15.735 [2024-12-06 00:06:48.169603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.169645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:15.735 [2024-12-06 00:06:48.174434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.174485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:15.735 [2024-12-06 00:06:48.174499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:29:15.735 [2024-12-06 00:06:48.174511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.174799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.174821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:15.735 [2024-12-06 00:06:48.174834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:29:15.735 [2024-12-06 00:06:48.174843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.189587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.189641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:15.735 [2024-12-06 00:06:48.189654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.721 ms 00:29:15.735 [2024-12-06 00:06:48.189663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.196021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.196061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:15.735 [2024-12-06 00:06:48.196081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.318 ms 00:29:15.735 [2024-12-06 00:06:48.196090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.222515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.222562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:15.735 [2024-12-06 00:06:48.222575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.359 ms 00:29:15.735 [2024-12-06 00:06:48.222582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.238998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.239039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:15.735 [2024-12-06 00:06:48.239052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.369 ms 00:29:15.735 [2024-12-06 00:06:48.239061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.243497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.243541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:15.735 [2024-12-06 00:06:48.243553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.384 ms 00:29:15.735 [2024-12-06 00:06:48.243569] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.269662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.269716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:15.735 [2024-12-06 00:06:48.269728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.075 ms 00:29:15.735 [2024-12-06 00:06:48.269736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.295181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.295225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:15.735 [2024-12-06 00:06:48.295236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.401 ms 00:29:15.735 [2024-12-06 00:06:48.295244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.319982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.320025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:15.735 [2024-12-06 00:06:48.320036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.694 ms 00:29:15.735 [2024-12-06 00:06:48.320043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.344571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.735 [2024-12-06 00:06:48.344615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:15.735 [2024-12-06 00:06:48.344626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.453 ms 00:29:15.735 [2024-12-06 00:06:48.344633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.735 [2024-12-06 00:06:48.344675] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:15.735 [2024-12-06 00:06:48.344691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:15.735 [2024-12-06 00:06:48.344702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:15.735 [2024-12-06 00:06:48.344711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:29:15.735 [2024-12-06 00:06:48.344784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.344991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.345000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.345008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.345016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.345023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.345031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:15.735 [2024-12-06 00:06:48.345039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345430] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:15.736 [2024-12-06 00:06:48.345555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:15.736 [2024-12-06 00:06:48.345563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93384091-e3d9-48f1-bf76-2666e57b6b04 00:29:15.736 [2024-12-06 00:06:48.345571] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:15.736 [2024-12-06 00:06:48.345579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 172480 00:29:15.736 [2024-12-06 00:06:48.345593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 170496 00:29:15.736 [2024-12-06 00:06:48.345602] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0116 00:29:15.736 [2024-12-06 00:06:48.345609] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:15.736 [2024-12-06 00:06:48.345625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:15.736 [2024-12-06 00:06:48.345632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:15.736 [2024-12-06 00:06:48.345639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:15.736 [2024-12-06 00:06:48.345646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:15.736 [2024-12-06 00:06:48.345653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.736 [2024-12-06 00:06:48.345662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Dump statistics 00:29:15.736 [2024-12-06 00:06:48.345671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:29:15.736 [2024-12-06 00:06:48.345679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.736 [2024-12-06 00:06:48.359247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.736 [2024-12-06 00:06:48.359289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:15.736 [2024-12-06 00:06:48.359301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.549 ms 00:29:15.736 [2024-12-06 00:06:48.359309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.736 [2024-12-06 00:06:48.359710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.736 [2024-12-06 00:06:48.359727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:15.736 [2024-12-06 00:06:48.359738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:29:15.736 [2024-12-06 00:06:48.359746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.736 [2024-12-06 00:06:48.396592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.736 [2024-12-06 00:06:48.396640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:15.736 [2024-12-06 00:06:48.396652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.736 [2024-12-06 00:06:48.396661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.736 [2024-12-06 00:06:48.396719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.736 [2024-12-06 00:06:48.396728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:15.736 [2024-12-06 00:06:48.396736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.736 [2024-12-06 00:06:48.396744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.736 [2024-12-06 00:06:48.396828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.736 [2024-12-06 00:06:48.396839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:15.736 [2024-12-06 00:06:48.396847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.736 [2024-12-06 00:06:48.396855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.736 [2024-12-06 00:06:48.396871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.736 [2024-12-06 00:06:48.396880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:15.736 [2024-12-06 00:06:48.396887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.736 [2024-12-06 00:06:48.396895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.996 [2024-12-06 00:06:48.482689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.482744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:15.997 [2024-12-06 00:06:48.482757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.482766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.552667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 
00:06:48.552721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:15.997 [2024-12-06 00:06:48.552734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.552743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.552803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.552821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:15.997 [2024-12-06 00:06:48.552830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.552839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.552895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.552905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:15.997 [2024-12-06 00:06:48.552914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.552923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.553058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.553070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:15.997 [2024-12-06 00:06:48.553083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.553091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.553124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.553133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:15.997 [2024-12-06 00:06:48.553142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.553151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.553190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.553202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:15.997 [2024-12-06 00:06:48.553214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.553222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.553267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.997 [2024-12-06 00:06:48.553278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:15.997 [2024-12-06 00:06:48.553287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.997 [2024-12-06 00:06:48.553296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.997 [2024-12-06 00:06:48.553430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.971 ms, result 0 00:29:16.938 00:29:16.938 00:29:16.938 00:06:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:18.854 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:18.855 00:06:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:18.855 [2024-12-06 00:06:51.501153] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:29:18.855 [2024-12-06 00:06:51.501242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82795 ] 00:29:19.114 [2024-12-06 00:06:51.657641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.115 [2024-12-06 00:06:51.762815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.376 [2024-12-06 00:06:52.057648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.376 [2024-12-06 00:06:52.057736] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.638 [2024-12-06 00:06:52.214433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.214480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:19.638 [2024-12-06 00:06:52.214493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:19.638 [2024-12-06 00:06:52.214501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.214548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.214560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:19.638 [2024-12-06 00:06:52.214569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:19.638 [2024-12-06 00:06:52.214576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.214593] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:19.638 [2024-12-06 00:06:52.215262] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:19.638 [2024-12-06 00:06:52.215278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.215286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:19.638 [2024-12-06 00:06:52.215295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:29:19.638 [2024-12-06 00:06:52.215302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.216755] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:19.638 [2024-12-06 00:06:52.229031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.229066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:19.638 [2024-12-06 00:06:52.229078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.278 ms 00:29:19.638 [2024-12-06 00:06:52.229087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.229143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.229152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:19.638 
[2024-12-06 00:06:52.229161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:19.638 [2024-12-06 00:06:52.229168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.233894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.233923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:19.638 [2024-12-06 00:06:52.233933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:29:19.638 [2024-12-06 00:06:52.233944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.234025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.234035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:19.638 [2024-12-06 00:06:52.234043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:19.638 [2024-12-06 00:06:52.234050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.234097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.234107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:19.638 [2024-12-06 00:06:52.234114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:19.638 [2024-12-06 00:06:52.234121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.234146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:19.638 [2024-12-06 00:06:52.237363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.237390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:19.638 [2024-12-06 00:06:52.237401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:29:19.638 [2024-12-06 00:06:52.237409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.237438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.237446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:19.638 [2024-12-06 00:06:52.237454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:19.638 [2024-12-06 00:06:52.237461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.237480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:19.638 [2024-12-06 00:06:52.237498] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:19.638 [2024-12-06 00:06:52.237531] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:19.638 [2024-12-06 00:06:52.237548] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:19.638 [2024-12-06 00:06:52.237649] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:19.638 [2024-12-06 00:06:52.237661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:19.638 [2024-12-06 00:06:52.237672] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:19.638 [2024-12-06 00:06:52.237681] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:19.638 [2024-12-06 00:06:52.237690] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:19.638 [2024-12-06 00:06:52.237697] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:19.638 [2024-12-06 00:06:52.237705] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:19.638 [2024-12-06 00:06:52.237714] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:19.638 [2024-12-06 00:06:52.237721] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:19.638 [2024-12-06 00:06:52.237729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.237736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:19.638 [2024-12-06 00:06:52.237743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:29:19.638 [2024-12-06 00:06:52.237750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.237832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.638 [2024-12-06 00:06:52.237840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:19.638 [2024-12-06 00:06:52.237847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:19.638 [2024-12-06 00:06:52.237854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.638 [2024-12-06 00:06:52.237956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:19.638 [2024-12-06 00:06:52.237981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:19.638 [2024-12-06 00:06:52.237989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.638 [2024-12-06 00:06:52.237997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.638 [2024-12-06 00:06:52.238004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:19.638 [2024-12-06 00:06:52.238012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:19.638 [2024-12-06 00:06:52.238019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:19.638 [2024-12-06 00:06:52.238026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:19.638 [2024-12-06 00:06:52.238032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:19.638 [2024-12-06 00:06:52.238038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.638 [2024-12-06 00:06:52.238045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:19.639 [2024-12-06 00:06:52.238052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:19.639 [2024-12-06 00:06:52.238058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.639 [2024-12-06 00:06:52.238069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:19.639 [2024-12-06 00:06:52.238076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:19.639 [2024-12-06 00:06:52.238083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:29:19.639 [2024-12-06 00:06:52.238090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:19.639 [2024-12-06 00:06:52.238096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:19.639 [2024-12-06 00:06:52.238115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:19.639 [2024-12-06 00:06:52.238134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:19.639 [2024-12-06 00:06:52.238153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:19.639 [2024-12-06 00:06:52.238172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:19.639 [2024-12-06 00:06:52.238190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.639 [2024-12-06 00:06:52.238202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:19.639 [2024-12-06 00:06:52.238209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:19.639 [2024-12-06 00:06:52.238215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.639 [2024-12-06 00:06:52.238221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:19.639 [2024-12-06 00:06:52.238227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:19.639 [2024-12-06 00:06:52.238234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:19.639 [2024-12-06 00:06:52.238246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:19.639 [2024-12-06 00:06:52.238252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:19.639 [2024-12-06 00:06:52.238266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:19.639 [2024-12-06 00:06:52.238273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.639 [2024-12-06 00:06:52.238288] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:19.639 [2024-12-06 00:06:52.238294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:19.639 [2024-12-06 00:06:52.238301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:19.639 [2024-12-06 00:06:52.238308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:19.639 [2024-12-06 00:06:52.238314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:19.639 [2024-12-06 00:06:52.238321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:19.639 [2024-12-06 00:06:52.238328] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:19.639 [2024-12-06 00:06:52.238337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:19.639 [2024-12-06 00:06:52.238354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:19.639 [2024-12-06 00:06:52.238361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:19.639 [2024-12-06 00:06:52.238368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:19.639 [2024-12-06 00:06:52.238375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:19.639 [2024-12-06 00:06:52.238381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:19.639 [2024-12-06 00:06:52.238388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:19.639 [2024-12-06 00:06:52.238394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:19.639 [2024-12-06 00:06:52.238401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:19.639 [2024-12-06 00:06:52.238407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:19.639 [2024-12-06 00:06:52.238442] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:19.639 
[2024-12-06 00:06:52.238449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:19.639 [2024-12-06 00:06:52.238463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:19.639 [2024-12-06 00:06:52.238471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:19.639 [2024-12-06 00:06:52.238478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:19.639 [2024-12-06 00:06:52.238485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.238492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:19.639 [2024-12-06 00:06:52.238499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:29:19.639 [2024-12-06 00:06:52.238506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.264017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.264048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:19.639 [2024-12-06 00:06:52.264058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.458 ms 00:29:19.639 [2024-12-06 00:06:52.264069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.264166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.264175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:19.639 [2024-12-06 00:06:52.264183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:29:19.639 [2024-12-06 00:06:52.264190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.304329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.304366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:19.639 [2024-12-06 00:06:52.304378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.090 ms 00:29:19.639 [2024-12-06 00:06:52.304386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.304424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.304433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:19.639 [2024-12-06 00:06:52.304445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:19.639 [2024-12-06 00:06:52.304452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.304800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.304824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:19.639 [2024-12-06 00:06:52.304833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:29:19.639 [2024-12-06 00:06:52.304840] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.304961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.304981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:19.639 [2024-12-06 00:06:52.304989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:19.639 [2024-12-06 00:06:52.305001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.318098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.318127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:19.639 [2024-12-06 00:06:52.318139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.078 ms 00:29:19.639 [2024-12-06 00:06:52.318146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.639 [2024-12-06 00:06:52.330914] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:19.639 [2024-12-06 00:06:52.330946] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:19.639 [2024-12-06 00:06:52.330958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.639 [2024-12-06 00:06:52.330979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:19.640 [2024-12-06 00:06:52.330988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.710 ms 00:29:19.640 [2024-12-06 00:06:52.330995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.355305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.355337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:19.901 [2024-12-06 00:06:52.355348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.272 ms 00:29:19.901 [2024-12-06 00:06:52.355355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.367299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.367330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:19.901 [2024-12-06 00:06:52.367340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.901 ms 00:29:19.901 [2024-12-06 00:06:52.367346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.378978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.379008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:19.901 [2024-12-06 00:06:52.379018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.600 ms 00:29:19.901 [2024-12-06 00:06:52.379025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.379608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.379627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:19.901 [2024-12-06 00:06:52.379638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:29:19.901 [2024-12-06 00:06:52.379645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 
00:06:52.436298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.436344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:19.901 [2024-12-06 00:06:52.436361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.635 ms 00:29:19.901 [2024-12-06 00:06:52.436369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.446558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:19.901 [2024-12-06 00:06:52.448954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.448993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:19.901 [2024-12-06 00:06:52.449004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.542 ms 00:29:19.901 [2024-12-06 00:06:52.449012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.449095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.449106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:19.901 [2024-12-06 00:06:52.449117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:19.901 [2024-12-06 00:06:52.449125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.449745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.449765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:19.901 [2024-12-06 00:06:52.449774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:29:19.901 [2024-12-06 00:06:52.449781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.449804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.449813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:19.901 [2024-12-06 00:06:52.449820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:19.901 [2024-12-06 00:06:52.449828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.449863] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:19.901 [2024-12-06 00:06:52.449873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.449880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:19.901 [2024-12-06 00:06:52.449888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:19.901 [2024-12-06 00:06:52.449895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.474373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.474508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:19.901 [2024-12-06 00:06:52.474571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.460 ms 00:29:19.901 [2024-12-06 00:06:52.474595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.474672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.901 [2024-12-06 00:06:52.474697] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:19.901 [2024-12-06 00:06:52.474718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:19.901 [2024-12-06 00:06:52.474736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.901 [2024-12-06 00:06:52.475744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.878 ms, result 0 00:29:21.284  [2024-12-06T00:06:54.937Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-06T00:06:55.880Z] Copying: 36/1024 [MB] (18 MBps) [2024-12-06T00:06:56.825Z] Copying: 60/1024 [MB] (23 MBps) [2024-12-06T00:06:57.763Z] Copying: 76/1024 [MB] (16 MBps) [2024-12-06T00:06:58.704Z] Copying: 98/1024 [MB] (21 MBps) [2024-12-06T00:07:00.086Z] Copying: 121/1024 [MB] (22 MBps) [2024-12-06T00:07:00.658Z] Copying: 140/1024 [MB] (18 MBps) [2024-12-06T00:07:02.048Z] Copying: 155/1024 [MB] (15 MBps) [2024-12-06T00:07:02.993Z] Copying: 168/1024 [MB] (12 MBps) [2024-12-06T00:07:03.938Z] Copying: 178/1024 [MB] (10 MBps) [2024-12-06T00:07:04.883Z] Copying: 190/1024 [MB] (12 MBps) [2024-12-06T00:07:05.824Z] Copying: 205/1024 [MB] (14 MBps) [2024-12-06T00:07:06.766Z] Copying: 216/1024 [MB] (11 MBps) [2024-12-06T00:07:07.711Z] Copying: 233/1024 [MB] (16 MBps) [2024-12-06T00:07:08.655Z] Copying: 245/1024 [MB] (12 MBps) [2024-12-06T00:07:10.037Z] Copying: 257/1024 [MB] (12 MBps) [2024-12-06T00:07:10.983Z] Copying: 269/1024 [MB] (11 MBps) [2024-12-06T00:07:11.927Z] Copying: 285/1024 [MB] (15 MBps) [2024-12-06T00:07:12.871Z] Copying: 301/1024 [MB] (16 MBps) [2024-12-06T00:07:13.874Z] Copying: 316/1024 [MB] (14 MBps) [2024-12-06T00:07:14.818Z] Copying: 331/1024 [MB] (14 MBps) [2024-12-06T00:07:15.754Z] Copying: 341/1024 [MB] (10 MBps) [2024-12-06T00:07:16.698Z] Copying: 355/1024 [MB] (13 MBps) [2024-12-06T00:07:18.086Z] Copying: 367/1024 [MB] (11 MBps) [2024-12-06T00:07:18.658Z] Copying: 378/1024 [MB] (11 MBps) [2024-12-06T00:07:20.046Z] Copying: 389/1024 [MB] (11 MBps) [2024-12-06T00:07:20.987Z] Copying: 406/1024 [MB] (17 MBps) [2024-12-06T00:07:21.928Z] Copying: 422/1024 [MB] (16 MBps) [2024-12-06T00:07:22.872Z] Copying: 433/1024 [MB] (10 MBps) [2024-12-06T00:07:23.818Z] Copying: 447/1024 [MB] (13 MBps) [2024-12-06T00:07:24.763Z] Copying: 467/1024 [MB] (19 MBps) [2024-12-06T00:07:25.705Z] Copying: 480/1024 [MB] (13 MBps) [2024-12-06T00:07:27.092Z] Copying: 499/1024 [MB] (19 MBps) [2024-12-06T00:07:27.718Z] Copying: 518/1024 [MB] (19 MBps) [2024-12-06T00:07:28.661Z] Copying: 539/1024 [MB] (21 MBps) [2024-12-06T00:07:30.048Z] Copying: 561/1024 [MB] (21 MBps) [2024-12-06T00:07:30.988Z] Copying: 575/1024 [MB] (14 MBps) [2024-12-06T00:07:31.933Z] Copying: 588/1024 [MB] (13 MBps) [2024-12-06T00:07:32.876Z] Copying: 601/1024 [MB] (12 MBps) [2024-12-06T00:07:33.818Z] Copying: 615/1024 [MB] (13 MBps) [2024-12-06T00:07:34.761Z] Copying: 630/1024 [MB] (15 MBps) [2024-12-06T00:07:35.857Z] Copying: 641/1024 [MB] (10 MBps) [2024-12-06T00:07:36.799Z] Copying: 657/1024 [MB] (15 MBps) [2024-12-06T00:07:37.740Z] Copying: 676/1024 [MB] (18 MBps) [2024-12-06T00:07:38.683Z] Copying: 693/1024 [MB] (17 MBps) [2024-12-06T00:07:40.080Z] Copying: 711/1024 [MB] (17 MBps) [2024-12-06T00:07:41.023Z] Copying: 729/1024 [MB] (18 MBps) [2024-12-06T00:07:41.968Z] Copying: 743/1024 [MB] (14 MBps) [2024-12-06T00:07:42.912Z] Copying: 759/1024 [MB] (15 MBps) [2024-12-06T00:07:43.856Z] Copying: 771/1024 [MB] (12 MBps) [2024-12-06T00:07:44.801Z] Copying: 783/1024 [MB] (11 MBps) 
[2024-12-06T00:07:45.744Z] Copying: 795/1024 [MB] (12 MBps) [2024-12-06T00:07:46.687Z] Copying: 810/1024 [MB] (14 MBps) [2024-12-06T00:07:48.075Z] Copying: 828/1024 [MB] (18 MBps) [2024-12-06T00:07:49.019Z] Copying: 846/1024 [MB] (17 MBps) [2024-12-06T00:07:49.962Z] Copying: 865/1024 [MB] (19 MBps) [2024-12-06T00:07:50.908Z] Copying: 879/1024 [MB] (13 MBps) [2024-12-06T00:07:51.850Z] Copying: 893/1024 [MB] (14 MBps) [2024-12-06T00:07:52.791Z] Copying: 910/1024 [MB] (16 MBps) [2024-12-06T00:07:53.732Z] Copying: 923/1024 [MB] (13 MBps) [2024-12-06T00:07:54.672Z] Copying: 943/1024 [MB] (19 MBps) [2024-12-06T00:07:56.052Z] Copying: 959/1024 [MB] (15 MBps) [2024-12-06T00:07:56.992Z] Copying: 969/1024 [MB] (10 MBps) [2024-12-06T00:07:57.937Z] Copying: 980/1024 [MB] (10 MBps) [2024-12-06T00:07:58.882Z] Copying: 990/1024 [MB] (10 MBps) [2024-12-06T00:07:59.456Z] Copying: 1007/1024 [MB] (16 MBps) [2024-12-06T00:07:59.456Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 00:07:59.314831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.314995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:26.747 [2024-12-06 00:07:59.315022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:26.747 [2024-12-06 00:07:59.315033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.315064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:26.747 [2024-12-06 00:07:59.318881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.319151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:26.747 [2024-12-06 00:07:59.319177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.796 ms 00:30:26.747 [2024-12-06 00:07:59.319188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.319489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.319504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:26.747 [2024-12-06 00:07:59.319516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:30:26.747 [2024-12-06 00:07:59.319526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.324760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.324799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:26.747 [2024-12-06 00:07:59.324811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.213 ms 00:30:26.747 [2024-12-06 00:07:59.324832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.331429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.331467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:26.747 [2024-12-06 00:07:59.331478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.573 ms 00:30:26.747 [2024-12-06 00:07:59.331486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.358253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.358440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:30:26.747 [2024-12-06 00:07:59.358461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.700 ms 00:30:26.747 [2024-12-06 00:07:59.358469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.374677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.374730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:26.747 [2024-12-06 00:07:59.374745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.886 ms 00:30:26.747 [2024-12-06 00:07:59.374755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.379580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.379627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:26.747 [2024-12-06 00:07:59.379639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.764 ms 00:30:26.747 [2024-12-06 00:07:59.379647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.405723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.405917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:26.747 [2024-12-06 00:07:59.405938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.060 ms 00:30:26.747 [2024-12-06 00:07:59.405945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.747 [2024-12-06 00:07:59.431622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.747 [2024-12-06 00:07:59.431668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:26.747 [2024-12-06 00:07:59.431680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.617 ms 00:30:26.747 [2024-12-06 00:07:59.431687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.011 [2024-12-06 00:07:59.456463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.011 [2024-12-06 00:07:59.456509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:27.011 [2024-12-06 00:07:59.456522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.731 ms 00:30:27.011 [2024-12-06 00:07:59.456529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.011 [2024-12-06 00:07:59.481268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.011 [2024-12-06 00:07:59.481314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:27.011 [2024-12-06 00:07:59.481326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.666 ms 00:30:27.011 [2024-12-06 00:07:59.481334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.011 [2024-12-06 00:07:59.481376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:27.011 [2024-12-06 00:07:59.481401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.011 [2024-12-06 00:07:59.481415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:27.011 [2024-12-06 00:07:59.481424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 
00:07:59.481433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:30:27.011 [2024-12-06 00:07:59.481627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.481994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:27.011 [2024-12-06 00:07:59.482002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:27.012 [2024-12-06 00:07:59.482223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:27.012 [2024-12-06 00:07:59.482231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 93384091-e3d9-48f1-bf76-2666e57b6b04 00:30:27.012 [2024-12-06 00:07:59.482240] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 262656 00:30:27.012 [2024-12-06 00:07:59.482248] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:27.012 [2024-12-06 00:07:59.482255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:27.012 [2024-12-06 00:07:59.482263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:27.012 [2024-12-06 00:07:59.482277] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:27.012 [2024-12-06 00:07:59.482286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:27.012 [2024-12-06 00:07:59.482304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:27.012 [2024-12-06 00:07:59.482311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:27.012 [2024-12-06 00:07:59.482318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:27.012 [2024-12-06 00:07:59.482326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.012 [2024-12-06 00:07:59.482333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:27.012 [2024-12-06 00:07:59.482343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:30:27.012 [2024-12-06 00:07:59.482353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.496236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.012 [2024-12-06 00:07:59.496277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:27.012 [2024-12-06 00:07:59.496288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.864 ms 00:30:27.012 [2024-12-06 00:07:59.496296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.496691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.012 [2024-12-06 00:07:59.496710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:27.012 [2024-12-06 00:07:59.496720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:30:27.012 [2024-12-06 00:07:59.496728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.533360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.533411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.012 [2024-12-06 00:07:59.533424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.533433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.533501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.533518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.012 [2024-12-06 00:07:59.533529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.533538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.533632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.533644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.012 [2024-12-06 00:07:59.533655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.533664] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.533681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.533692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.012 [2024-12-06 00:07:59.533704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.533712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.619102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.619158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.012 [2024-12-06 00:07:59.619171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.619180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:27.012 [2024-12-06 00:07:59.688397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:27.012 [2024-12-06 00:07:59.688483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:27.012 [2024-12-06 00:07:59.688571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:27.012 [2024-12-06 00:07:59.688703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:27.012 [2024-12-06 00:07:59.688765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:27.012 [2024-12-06 00:07:59.688840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.688895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.012 [2024-12-06 00:07:59.688906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:27.012 [2024-12-06 00:07:59.688916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.012 [2024-12-06 00:07:59.688928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.012 [2024-12-06 00:07:59.689102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.237 ms, result 0 00:30:27.956 00:30:27.956 00:30:27.956 00:08:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:30.497 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:30.497 Process with pid 80927 is not found 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80927 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80927 ']' 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80927 00:30:30.497 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80927) - No such process 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80927 is not found' 00:30:30.497 00:08:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:30.497 Remove shared memory files 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:30.497 ************************************ 00:30:30.497 END TEST ftl_dirty_shutdown 00:30:30.497 ************************************ 00:30:30.497 00:30:30.497 real 4m9.976s 00:30:30.497 user 4m32.645s 00:30:30.497 sys 0m26.753s 00:30:30.497 00:08:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.497 00:08:03 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:30.497 00:08:03 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:30.497 00:08:03 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:30.497 00:08:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.497 00:08:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:30.497 ************************************ 00:30:30.498 START TEST ftl_upgrade_shutdown 00:30:30.498 ************************************ 00:30:30.498 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:30.758 * Looking for test storage... 00:30:30.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.758 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.759 --rc genhtml_branch_coverage=1 00:30:30.759 --rc genhtml_function_coverage=1 00:30:30.759 --rc genhtml_legend=1 00:30:30.759 --rc geninfo_all_blocks=1 00:30:30.759 --rc geninfo_unexecuted_blocks=1 00:30:30.759 00:30:30.759 ' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.759 --rc genhtml_branch_coverage=1 00:30:30.759 --rc genhtml_function_coverage=1 00:30:30.759 --rc genhtml_legend=1 00:30:30.759 --rc geninfo_all_blocks=1 00:30:30.759 --rc geninfo_unexecuted_blocks=1 00:30:30.759 00:30:30.759 ' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.759 --rc genhtml_branch_coverage=1 00:30:30.759 --rc genhtml_function_coverage=1 00:30:30.759 --rc genhtml_legend=1 00:30:30.759 --rc geninfo_all_blocks=1 00:30:30.759 --rc geninfo_unexecuted_blocks=1 00:30:30.759 00:30:30.759 ' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:30.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.759 --rc genhtml_branch_coverage=1 00:30:30.759 --rc genhtml_function_coverage=1 00:30:30.759 --rc genhtml_legend=1 00:30:30.759 --rc geninfo_all_blocks=1 00:30:30.759 --rc geninfo_unexecuted_blocks=1 00:30:30.759 00:30:30.759 ' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:30.759 00:08:03 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83582 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83582 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83582 ']' 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.759 00:08:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:30.759 [2024-12-06 00:08:03.417497] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
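The trace above is the upgrade_shutdown prologue: the FTL geometry is exported (base bdev 0000:00:11.0, size 20480; cache 0000:00:10.0, size 5120; L2P DRAM limit 2), tcp_target_setup launches a dedicated spdk_tgt pinned to core 0 (pid 83582), and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough standalone sketch of that launch-and-wait pattern follows; the polling loop is a simplified stand-in for the repo's waitforlisten helper, not its actual implementation:

# start an SPDK target on core 0 and wait until its RPC socket responds (simplified sketch)
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!
# poll the default RPC socket; rpc_get_methods succeeds once the app is listening
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
echo "spdk_tgt (pid $spdk_tgt_pid) is up"

Once that returns, the EAL and app-start notices below are the target coming up, after which the script builds the base bdev over RPC.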
00:30:30.759 [2024-12-06 00:08:03.417760] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83582 ] 00:30:31.020 [2024-12-06 00:08:03.577064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.020 [2024-12-06 00:08:03.677808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:31.964 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:32.226 { 00:30:32.226 "name": "basen1", 00:30:32.226 "aliases": [ 00:30:32.226 "cbbfe153-103d-4a3a-9908-566caadb0ebb" 00:30:32.226 ], 00:30:32.226 "product_name": "NVMe disk", 00:30:32.226 "block_size": 4096, 00:30:32.226 "num_blocks": 1310720, 00:30:32.226 "uuid": "cbbfe153-103d-4a3a-9908-566caadb0ebb", 00:30:32.226 "numa_id": -1, 00:30:32.226 "assigned_rate_limits": { 00:30:32.226 "rw_ios_per_sec": 0, 00:30:32.226 "rw_mbytes_per_sec": 0, 00:30:32.226 "r_mbytes_per_sec": 0, 00:30:32.226 "w_mbytes_per_sec": 0 00:30:32.226 }, 00:30:32.226 "claimed": true, 00:30:32.226 "claim_type": "read_many_write_one", 00:30:32.226 "zoned": false, 00:30:32.226 "supported_io_types": { 00:30:32.226 "read": true, 00:30:32.226 "write": true, 00:30:32.226 "unmap": true, 00:30:32.226 "flush": true, 00:30:32.226 "reset": true, 00:30:32.226 "nvme_admin": true, 00:30:32.226 "nvme_io": true, 00:30:32.226 "nvme_io_md": false, 00:30:32.226 "write_zeroes": true, 00:30:32.226 "zcopy": false, 00:30:32.226 "get_zone_info": false, 00:30:32.226 "zone_management": false, 00:30:32.226 "zone_append": false, 00:30:32.226 "compare": true, 00:30:32.226 "compare_and_write": false, 00:30:32.226 "abort": true, 00:30:32.226 "seek_hole": false, 00:30:32.226 "seek_data": false, 00:30:32.226 "copy": true, 00:30:32.226 "nvme_iov_md": false 00:30:32.226 }, 00:30:32.226 "driver_specific": { 00:30:32.226 "nvme": [ 00:30:32.226 { 00:30:32.226 "pci_address": "0000:00:11.0", 00:30:32.226 "trid": { 00:30:32.226 "trtype": "PCIe", 00:30:32.226 "traddr": "0000:00:11.0" 00:30:32.226 }, 00:30:32.226 "ctrlr_data": { 00:30:32.226 "cntlid": 0, 00:30:32.226 "vendor_id": "0x1b36", 00:30:32.226 "model_number": "QEMU NVMe Ctrl", 00:30:32.226 "serial_number": "12341", 00:30:32.226 "firmware_revision": "8.0.0", 00:30:32.226 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:32.226 "oacs": { 00:30:32.226 "security": 0, 00:30:32.226 "format": 1, 00:30:32.226 "firmware": 0, 00:30:32.226 "ns_manage": 1 00:30:32.226 }, 00:30:32.226 "multi_ctrlr": false, 00:30:32.226 "ana_reporting": false 00:30:32.226 }, 00:30:32.226 "vs": { 00:30:32.226 "nvme_version": "1.4" 00:30:32.226 }, 00:30:32.226 "ns_data": { 00:30:32.226 "id": 1, 00:30:32.226 "can_share": false 00:30:32.226 } 00:30:32.226 } 00:30:32.226 ], 00:30:32.226 "mp_policy": "active_passive" 00:30:32.226 } 00:30:32.226 } 00:30:32.226 ]' 00:30:32.226 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.488 00:08:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:32.488 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e09870e9-03a2-4186-879e-099f7ea4cc86 00:30:32.488 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:32.488 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e09870e9-03a2-4186-879e-099f7ea4cc86 00:30:32.749 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:33.008 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=57622697-33a1-4db6-b776-4f1d7be26bf1 00:30:33.008 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 57622697-33a1-4db6-b776-4f1d7be26bf1 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=184dcf83-1acf-4834-a92e-35a7d63a23d9 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 184dcf83-1acf-4834-a92e-35a7d63a23d9 ]] 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 184dcf83-1acf-4834-a92e-35a7d63a23d9 5120 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=184dcf83-1acf-4834-a92e-35a7d63a23d9 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 184dcf83-1acf-4834-a92e-35a7d63a23d9 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=184dcf83-1acf-4834-a92e-35a7d63a23d9 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:33.268 00:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 184dcf83-1acf-4834-a92e-35a7d63a23d9 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:33.528 { 00:30:33.528 "name": "184dcf83-1acf-4834-a92e-35a7d63a23d9", 00:30:33.528 "aliases": [ 00:30:33.528 "lvs/basen1p0" 00:30:33.528 ], 00:30:33.528 "product_name": "Logical Volume", 00:30:33.528 "block_size": 4096, 00:30:33.528 "num_blocks": 5242880, 00:30:33.528 "uuid": "184dcf83-1acf-4834-a92e-35a7d63a23d9", 00:30:33.528 "assigned_rate_limits": { 00:30:33.528 "rw_ios_per_sec": 0, 00:30:33.528 "rw_mbytes_per_sec": 0, 00:30:33.528 "r_mbytes_per_sec": 0, 00:30:33.528 "w_mbytes_per_sec": 0 00:30:33.528 }, 00:30:33.528 "claimed": false, 00:30:33.528 "zoned": false, 00:30:33.528 "supported_io_types": { 00:30:33.528 "read": true, 00:30:33.528 "write": true, 00:30:33.528 "unmap": true, 00:30:33.528 "flush": false, 00:30:33.528 "reset": true, 00:30:33.528 "nvme_admin": false, 00:30:33.528 "nvme_io": false, 00:30:33.528 "nvme_io_md": false, 00:30:33.528 "write_zeroes": 
true, 00:30:33.528 "zcopy": false, 00:30:33.528 "get_zone_info": false, 00:30:33.528 "zone_management": false, 00:30:33.528 "zone_append": false, 00:30:33.528 "compare": false, 00:30:33.528 "compare_and_write": false, 00:30:33.528 "abort": false, 00:30:33.528 "seek_hole": true, 00:30:33.528 "seek_data": true, 00:30:33.528 "copy": false, 00:30:33.528 "nvme_iov_md": false 00:30:33.528 }, 00:30:33.528 "driver_specific": { 00:30:33.528 "lvol": { 00:30:33.528 "lvol_store_uuid": "57622697-33a1-4db6-b776-4f1d7be26bf1", 00:30:33.528 "base_bdev": "basen1", 00:30:33.528 "thin_provision": true, 00:30:33.528 "num_allocated_clusters": 0, 00:30:33.528 "snapshot": false, 00:30:33.528 "clone": false, 00:30:33.528 "esnap_clone": false 00:30:33.528 } 00:30:33.528 } 00:30:33.528 } 00:30:33.528 ]' 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:33.528 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:33.789 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:33.789 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:33.789 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:34.049 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:34.049 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:34.049 00:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 184dcf83-1acf-4834-a92e-35a7d63a23d9 -c cachen1p0 --l2p_dram_limit 2 00:30:34.310 [2024-12-06 00:08:06.832320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.310 [2024-12-06 00:08:06.832362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:34.310 [2024-12-06 00:08:06.832375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:34.310 [2024-12-06 00:08:06.832381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.310 [2024-12-06 00:08:06.832429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.310 [2024-12-06 00:08:06.832437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:34.310 [2024-12-06 00:08:06.832445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:34.310 [2024-12-06 00:08:06.832452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.310 [2024-12-06 00:08:06.832468] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:34.310 [2024-12-06 
00:08:06.833109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:34.310 [2024-12-06 00:08:06.833126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.310 [2024-12-06 00:08:06.833132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:34.310 [2024-12-06 00:08:06.833141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.660 ms 00:30:34.310 [2024-12-06 00:08:06.833147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.310 [2024-12-06 00:08:06.833198] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 895f9941-c2a4-4387-825f-4634a31cfd5e 00:30:34.310 [2024-12-06 00:08:06.834143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.310 [2024-12-06 00:08:06.834166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:34.310 [2024-12-06 00:08:06.834174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:34.310 [2024-12-06 00:08:06.834181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.310 [2024-12-06 00:08:06.838834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.838865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:34.311 [2024-12-06 00:08:06.838872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.621 ms 00:30:34.311 [2024-12-06 00:08:06.838879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.838910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.838918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:34.311 [2024-12-06 00:08:06.838924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:34.311 [2024-12-06 00:08:06.838932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.838976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.838986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:34.311 [2024-12-06 00:08:06.838994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:34.311 [2024-12-06 00:08:06.839000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.839017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:34.311 [2024-12-06 00:08:06.841841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.841865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:34.311 [2024-12-06 00:08:06.841874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.827 ms 00:30:34.311 [2024-12-06 00:08:06.841880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.841902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.841908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:34.311 [2024-12-06 00:08:06.841916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:34.311 [2024-12-06 00:08:06.841922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.841936] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:34.311 [2024-12-06 00:08:06.842059] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:34.311 [2024-12-06 00:08:06.842071] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:34.311 [2024-12-06 00:08:06.842079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:34.311 [2024-12-06 00:08:06.842088] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842095] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842103] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:34.311 [2024-12-06 00:08:06.842108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:34.311 [2024-12-06 00:08:06.842117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:34.311 [2024-12-06 00:08:06.842123] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:34.311 [2024-12-06 00:08:06.842130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.842136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:34.311 [2024-12-06 00:08:06.842144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:30:34.311 [2024-12-06 00:08:06.842149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.842214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.311 [2024-12-06 00:08:06.842226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:34.311 [2024-12-06 00:08:06.842233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:34.311 [2024-12-06 00:08:06.842238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.311 [2024-12-06 00:08:06.842316] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:34.311 [2024-12-06 00:08:06.842323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:34.311 [2024-12-06 00:08:06.842331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:34.311 [2024-12-06 00:08:06.842350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:34.311 [2024-12-06 00:08:06.842361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:34.311 [2024-12-06 00:08:06.842367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:34.311 [2024-12-06 00:08:06.842372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:34.311 [2024-12-06 00:08:06.842386] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:34.311 [2024-12-06 00:08:06.842392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:34.311 [2024-12-06 00:08:06.842404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:34.311 [2024-12-06 00:08:06.842409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:34.311 [2024-12-06 00:08:06.842422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:34.311 [2024-12-06 00:08:06.842428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:34.311 [2024-12-06 00:08:06.842439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:34.311 [2024-12-06 00:08:06.842444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:34.311 [2024-12-06 00:08:06.842455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:34.311 [2024-12-06 00:08:06.842462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:34.311 [2024-12-06 00:08:06.842474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:34.311 [2024-12-06 00:08:06.842479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:34.311 [2024-12-06 00:08:06.842490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:34.311 [2024-12-06 00:08:06.842496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:34.311 [2024-12-06 00:08:06.842508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:34.311 [2024-12-06 00:08:06.842513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:34.311 [2024-12-06 00:08:06.842525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:34.311 [2024-12-06 00:08:06.842543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:34.311 [2024-12-06 00:08:06.842559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:34.311 [2024-12-06 00:08:06.842565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842570] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:34.311 [2024-12-06 00:08:06.842577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:34.311 [2024-12-06 00:08:06.842582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.311 [2024-12-06 00:08:06.842596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:34.311 [2024-12-06 00:08:06.842603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:34.311 [2024-12-06 00:08:06.842608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:34.311 [2024-12-06 00:08:06.842615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:34.311 [2024-12-06 00:08:06.842619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:34.311 [2024-12-06 00:08:06.842626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:34.311 [2024-12-06 00:08:06.842632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:34.311 [2024-12-06 00:08:06.842642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.311 [2024-12-06 00:08:06.842648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:34.311 [2024-12-06 00:08:06.842657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:34.311 [2024-12-06 00:08:06.842663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:34.311 [2024-12-06 00:08:06.842669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:34.311 [2024-12-06 00:08:06.842675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:34.311 [2024-12-06 00:08:06.842682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:34.311 [2024-12-06 00:08:06.842687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:34.311 [2024-12-06 00:08:06.842695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:34.311 [2024-12-06 00:08:06.842701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:34.312 [2024-12-06 00:08:06.842738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:34.312 [2024-12-06 00:08:06.842745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:34.312 [2024-12-06 00:08:06.842759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:34.312 [2024-12-06 00:08:06.842764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:34.312 [2024-12-06 00:08:06.842771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:34.312 [2024-12-06 00:08:06.842777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.312 [2024-12-06 00:08:06.842783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:34.312 [2024-12-06 00:08:06.842789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.514 ms 00:30:34.312 [2024-12-06 00:08:06.842795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.312 [2024-12-06 00:08:06.842823] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
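The startup trace above is the result of the bdev stack that ftl/common.sh assembles before handing the volume to bdev_ftl_create: a base NVMe namespace, a thin-provisioned logical volume on top of it, and a second NVMe namespace split off as the non-volatile cache. Condensed into the underlying RPC calls, it looks roughly like the sketch below (commands and addresses are the ones visible in the trace for this run; <lvs_uuid> and <lvol_uuid> stand for the UUIDs returned by the two create calls):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0      # 5 GiB QEMU namespace becomes basen1
  for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      $RPC bdev_lvol_delete_lvstore -u "$lvs"                           # clear_lvols: drop any stale lvstore
  done
  $RPC bdev_lvol_create_lvstore basen1 lvs                              # new lvstore on the base namespace
  $RPC bdev_lvol_create basen1p0 20480 -t -u <lvs_uuid>                 # thin-provisioned (-t) 20 GiB lvol
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0     # cache namespace becomes cachen1
  $RPC bdev_split_create cachen1 -s 5120 1                              # cachen1p0, the 5 GiB NV cache
  $RPC -t 60 bdev_ftl_create -b ftl -d <lvol_uuid> -c cachen1p0 --l2p_dram_limit 2   # -t 60 raises the RPC timeout; startup scrubs the cache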
00:30:34.312 [2024-12-06 00:08:06.842832] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:38.521 [2024-12-06 00:08:10.371386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.371445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:38.521 [2024-12-06 00:08:10.371461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3528.549 ms 00:30:38.521 [2024-12-06 00:08:10.371471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.398516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.398710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:38.521 [2024-12-06 00:08:10.398731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.833 ms 00:30:38.521 [2024-12-06 00:08:10.398742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.398821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.398834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:38.521 [2024-12-06 00:08:10.398843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:38.521 [2024-12-06 00:08:10.398859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.430765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.430808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:38.521 [2024-12-06 00:08:10.430819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.871 ms 00:30:38.521 [2024-12-06 00:08:10.430830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.430859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.430872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:38.521 [2024-12-06 00:08:10.430880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:38.521 [2024-12-06 00:08:10.430889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.431329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.431357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:38.521 [2024-12-06 00:08:10.431373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:30:38.521 [2024-12-06 00:08:10.431382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.431422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.431433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:38.521 [2024-12-06 00:08:10.431443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:38.521 [2024-12-06 00:08:10.431455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.446526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.446564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:38.521 [2024-12-06 00:08:10.446575] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.053 ms 00:30:38.521 [2024-12-06 00:08:10.446584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.479381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:38.521 [2024-12-06 00:08:10.480746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.480784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:38.521 [2024-12-06 00:08:10.480798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.069 ms 00:30:38.521 [2024-12-06 00:08:10.480806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.505352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.505395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:38.521 [2024-12-06 00:08:10.505409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.505 ms 00:30:38.521 [2024-12-06 00:08:10.505417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.505508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.505522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:38.521 [2024-12-06 00:08:10.505535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:30:38.521 [2024-12-06 00:08:10.505543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.530119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.530160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:38.521 [2024-12-06 00:08:10.530174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.521 ms 00:30:38.521 [2024-12-06 00:08:10.530182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.554301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.554345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:38.521 [2024-12-06 00:08:10.554361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.067 ms 00:30:38.521 [2024-12-06 00:08:10.554369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.554955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.554984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:38.521 [2024-12-06 00:08:10.554997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:30:38.521 [2024-12-06 00:08:10.555006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.643590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.643646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:38.521 [2024-12-06 00:08:10.643667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.517 ms 00:30:38.521 [2024-12-06 00:08:10.643675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.671944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:38.521 [2024-12-06 00:08:10.672011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:38.521 [2024-12-06 00:08:10.672028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.160 ms 00:30:38.521 [2024-12-06 00:08:10.672036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.699457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.699511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:38.521 [2024-12-06 00:08:10.699528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.359 ms 00:30:38.521 [2024-12-06 00:08:10.699535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.726410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.726464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:38.521 [2024-12-06 00:08:10.726480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.816 ms 00:30:38.521 [2024-12-06 00:08:10.726487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.521 [2024-12-06 00:08:10.726547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.521 [2024-12-06 00:08:10.726556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:38.521 [2024-12-06 00:08:10.726571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:38.521 [2024-12-06 00:08:10.726579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.522 [2024-12-06 00:08:10.726689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.522 [2024-12-06 00:08:10.726703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:38.522 [2024-12-06 00:08:10.726714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:38.522 [2024-12-06 00:08:10.726722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.522 [2024-12-06 00:08:10.728056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3895.197 ms, result 0 00:30:38.522 { 00:30:38.522 "name": "ftl", 00:30:38.522 "uuid": "895f9941-c2a4-4387-825f-4634a31cfd5e" 00:30:38.522 } 00:30:38.522 00:08:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:38.522 [2024-12-06 00:08:10.943015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.522 00:08:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:38.522 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:38.783 [2024-12-06 00:08:11.359444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:38.783 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:39.045 [2024-12-06 00:08:11.576430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:39.045 00:08:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:39.306 Fill FTL, iteration 1 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:39.306 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83710 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83710 /var/tmp/spdk.tgt.sock 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83710 ']' 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:39.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.307 00:08:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:39.568 [2024-12-06 00:08:12.024269] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
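At this point the new ftl bdev has been published over NVMe/TCP on the loopback address so that a separate spdk_dd process can drive I/O against it, and upgrade_shutdown.sh has fixed the fill geometry: bs=1048576 and count=1024, i.e. 1 GiB per pass, with iterations=2 and queue depth 2. The export side, condensed from the RPCs in the trace above (a sketch; rpc.py is scripts/rpc.py from this repo):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1          # allow any host, one namespace
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl              # expose the ftl bdev as that namespace
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config                                                       # snapshot the target configuration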
00:30:39.568 [2024-12-06 00:08:12.024453] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83710 ] 00:30:39.569 [2024-12-06 00:08:12.182075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.830 [2024-12-06 00:08:12.300225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.404 00:08:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.404 00:08:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:40.404 00:08:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:40.666 ftln1 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83710 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83710 ']' 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83710 00:30:40.666 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83710 00:30:40.927 killing process with pid 83710 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83710' 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83710 00:30:40.927 00:08:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83710 00:30:42.311 00:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:42.311 00:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:42.311 [2024-12-06 00:08:14.884304] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
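The dd side works through a short-lived helper target: spdk_tgt pinned to core 1 listens on /var/tmp/spdk.tgt.sock, attaches to the loopback subsystem (which yields the bdev ftln1), its bdev subsystem config is captured as JSON, the helper is then shut down (killprocess 83710 above), and spdk_dd replays that JSON to build its own attachment to ftln1 and perform the copy. A sketch of the sequence, assuming the echoed JSON is redirected into the config/ini.json file that spdk_dd is pointed at:

  INI_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  $INI_RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0                                      # creates ftln1 on the helper target
  {
      echo '{"subsystems": ['
      $INI_RPC save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json              # assumed redirection target
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0   # fill pass 1: 1 GiB at offset 0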
00:30:42.311 [2024-12-06 00:08:14.884418] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83752 ] 00:30:42.573 [2024-12-06 00:08:15.045982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.573 [2024-12-06 00:08:15.139532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.047  [2024-12-06T00:08:17.700Z] Copying: 217/1024 [MB] (217 MBps) [2024-12-06T00:08:18.639Z] Copying: 462/1024 [MB] (245 MBps) [2024-12-06T00:08:19.580Z] Copying: 711/1024 [MB] (249 MBps) [2024-12-06T00:08:19.891Z] Copying: 956/1024 [MB] (245 MBps) [2024-12-06T00:08:20.460Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:30:47.751 00:30:47.751 Calculate MD5 checksum, iteration 1 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:47.751 00:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:47.751 [2024-12-06 00:08:20.414776] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
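After the fill completes, the same 1 GiB slice is read back out of ftln1 into a scratch file and hashed, and the MD5 is stored in the sums array. The read-back pair of steps as they appear in the trace (paths from this run):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')   # 60bd3bee... in this run, shown below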
00:30:47.751 [2024-12-06 00:08:20.415069] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83810 ] 00:30:48.012 [2024-12-06 00:08:20.570168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.012 [2024-12-06 00:08:20.646529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.394  [2024-12-06T00:08:23.047Z] Copying: 652/1024 [MB] (652 MBps) [2024-12-06T00:08:23.620Z] Copying: 1024/1024 [MB] (average 570 MBps) 00:30:50.911 00:30:50.911 00:08:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:50.911 00:08:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:53.461 Fill FTL, iteration 2 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=60bd3beee1a84a240d2dac5987724a65 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:53.461 00:08:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:53.461 [2024-12-06 00:08:25.698002] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
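Iteration 2 repeats the same fill/read/hash cycle one slice further in: seek (on the write) and skip (on the read-back) both advance by count blocks, 1024 x 1 MiB = 1 GiB, so the two iterations exercise disjoint 1 GiB ranges of the FTL volume. Taken together, the two unrolled passes in this trace amount to a loop of roughly this shape (a reconstruction; tcp_dd is the ftl/common.sh helper used throughout):

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for ((i = 0; i < 2; i++)); do
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$((i * 1024))
      tcp_dd --ib=ftln1 --of="$file"      --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
      sums[i]=$(md5sum "$file" | cut -f1 -d' ')
  done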
00:30:53.461 [2024-12-06 00:08:25.698116] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83868 ] 00:30:53.461 [2024-12-06 00:08:25.859150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.461 [2024-12-06 00:08:25.964841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.842  [2024-12-06T00:08:28.484Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-06T00:08:29.418Z] Copying: 424/1024 [MB] (232 MBps) [2024-12-06T00:08:30.353Z] Copying: 665/1024 [MB] (241 MBps) [2024-12-06T00:08:30.919Z] Copying: 902/1024 [MB] (237 MBps) [2024-12-06T00:08:31.486Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:30:58.777 00:30:59.037 Calculate MD5 checksum, iteration 2 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:59.037 00:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:59.037 [2024-12-06 00:08:31.558584] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
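Once both checksums are recorded, the test flips FTL properties over RPC: verbose_mode is enabled so that bdev_ftl_get_properties reports the per-band and per-chunk state dumped below, prep_upgrade_on_shutdown is armed so the next shutdown runs the upgrade preparation, and the non-empty NV cache chunks are counted to confirm the fill actually reached the cache. Condensed from the RPCs in the trace that follows (a sketch):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_ftl_set_property -b ftl -p verbose_mode -v true               # unlock the detailed property dump
  $RPC bdev_ftl_get_properties -b ftl                                     # per-band / per-chunk state, shown below
  $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true   # arm shutdown-time upgrade prep
  used=$($RPC bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1                                             # assumed failure handling; this run reports used=3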
00:30:59.037 [2024-12-06 00:08:31.558703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83934 ] 00:30:59.037 [2024-12-06 00:08:31.714601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.296 [2024-12-06 00:08:31.800417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.670  [2024-12-06T00:08:33.945Z] Copying: 650/1024 [MB] (650 MBps) [2024-12-06T00:08:34.884Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:31:02.175 00:31:02.437 00:08:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:02.437 00:08:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:04.345 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:04.345 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=80b6efe987e9bb25b17c14ad4d41415a 00:31:04.345 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:04.345 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:04.345 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:04.605 [2024-12-06 00:08:37.194827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.605 [2024-12-06 00:08:37.194868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:04.605 [2024-12-06 00:08:37.194879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:04.605 [2024-12-06 00:08:37.194886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.605 [2024-12-06 00:08:37.194905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.605 [2024-12-06 00:08:37.194914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:04.605 [2024-12-06 00:08:37.194921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:04.605 [2024-12-06 00:08:37.194926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.605 [2024-12-06 00:08:37.194942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.605 [2024-12-06 00:08:37.194948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:04.605 [2024-12-06 00:08:37.194954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:04.605 [2024-12-06 00:08:37.194960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.605 [2024-12-06 00:08:37.195022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.184 ms, result 0 00:31:04.605 true 00:31:04.605 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:04.865 { 00:31:04.865 "name": "ftl", 00:31:04.865 "properties": [ 00:31:04.865 { 00:31:04.865 "name": "superblock_version", 00:31:04.865 "value": 5, 00:31:04.865 "read-only": true 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "name": "base_device", 00:31:04.865 "bands": [ 00:31:04.865 { 00:31:04.865 "id": 0, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 
00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 1, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 2, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 3, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 4, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 5, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 6, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 7, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 8, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 9, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 10, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 11, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 12, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 13, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 14, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 15, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 16, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 17, 00:31:04.865 "state": "FREE", 00:31:04.865 "validity": 0.0 00:31:04.865 } 00:31:04.865 ], 00:31:04.865 "read-only": true 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "name": "cache_device", 00:31:04.865 "type": "bdev", 00:31:04.865 "chunks": [ 00:31:04.865 { 00:31:04.865 "id": 0, 00:31:04.865 "state": "INACTIVE", 00:31:04.865 "utilization": 0.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 1, 00:31:04.865 "state": "CLOSED", 00:31:04.865 "utilization": 1.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 2, 00:31:04.865 "state": "CLOSED", 00:31:04.865 "utilization": 1.0 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 3, 00:31:04.865 "state": "OPEN", 00:31:04.865 "utilization": 0.001953125 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "id": 4, 00:31:04.865 "state": "OPEN", 00:31:04.865 "utilization": 0.0 00:31:04.865 } 00:31:04.865 ], 00:31:04.865 "read-only": true 00:31:04.865 }, 00:31:04.865 { 00:31:04.865 "name": "verbose_mode", 00:31:04.865 "value": true, 00:31:04.866 "unit": "", 00:31:04.866 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:04.866 }, 00:31:04.866 { 00:31:04.866 "name": "prep_upgrade_on_shutdown", 00:31:04.866 "value": false, 00:31:04.866 "unit": "", 00:31:04.866 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:04.866 } 00:31:04.866 ] 00:31:04.866 } 00:31:04.866 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:04.866 [2024-12-06 00:08:37.551104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:04.866 [2024-12-06 00:08:37.551139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:04.866 [2024-12-06 00:08:37.551148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:04.866 [2024-12-06 00:08:37.551153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.866 [2024-12-06 00:08:37.551169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.866 [2024-12-06 00:08:37.551175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:04.866 [2024-12-06 00:08:37.551181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:04.866 [2024-12-06 00:08:37.551187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.866 [2024-12-06 00:08:37.551201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:04.866 [2024-12-06 00:08:37.551207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:04.866 [2024-12-06 00:08:37.551213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:04.866 [2024-12-06 00:08:37.551218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:04.866 [2024-12-06 00:08:37.551259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:31:04.866 true 00:31:05.124 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:05.124 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:05.124 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:05.124 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:05.124 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:05.124 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:05.383 [2024-12-06 00:08:37.919404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.383 [2024-12-06 00:08:37.919523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:05.383 [2024-12-06 00:08:37.919569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:05.383 [2024-12-06 00:08:37.919587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.383 [2024-12-06 00:08:37.919633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.383 [2024-12-06 00:08:37.919651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:05.383 [2024-12-06 00:08:37.919666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:05.383 [2024-12-06 00:08:37.919680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.383 [2024-12-06 00:08:37.919704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.383 [2024-12-06 00:08:37.919719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:05.383 [2024-12-06 00:08:37.919734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:05.383 [2024-12-06 00:08:37.919771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:05.383 [2024-12-06 00:08:37.919839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.425 ms, result 0 00:31:05.383 true 00:31:05.383 00:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:05.383 { 00:31:05.383 "name": "ftl", 00:31:05.383 "properties": [ 00:31:05.383 { 00:31:05.383 "name": "superblock_version", 00:31:05.383 "value": 5, 00:31:05.383 "read-only": true 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "name": "base_device", 00:31:05.383 "bands": [ 00:31:05.383 { 00:31:05.383 "id": 0, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 1, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 2, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 3, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 4, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 5, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 6, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 7, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 8, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 9, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 10, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 11, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 12, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 13, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 14, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 15, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 16, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "id": 17, 00:31:05.383 "state": "FREE", 00:31:05.383 "validity": 0.0 00:31:05.383 } 00:31:05.383 ], 00:31:05.383 "read-only": true 00:31:05.383 }, 00:31:05.383 { 00:31:05.383 "name": "cache_device", 00:31:05.383 "type": "bdev", 00:31:05.383 "chunks": [ 00:31:05.383 { 00:31:05.383 "id": 0, 00:31:05.384 "state": "INACTIVE", 00:31:05.384 "utilization": 0.0 00:31:05.384 }, 00:31:05.384 { 00:31:05.384 "id": 1, 00:31:05.384 "state": "CLOSED", 00:31:05.384 "utilization": 1.0 00:31:05.384 }, 00:31:05.384 { 00:31:05.384 "id": 2, 00:31:05.384 "state": "CLOSED", 00:31:05.384 "utilization": 1.0 00:31:05.384 }, 00:31:05.384 { 00:31:05.384 "id": 3, 00:31:05.384 "state": "OPEN", 00:31:05.384 "utilization": 0.001953125 00:31:05.384 }, 00:31:05.384 { 00:31:05.384 "id": 4, 00:31:05.384 "state": "OPEN", 00:31:05.384 "utilization": 0.0 00:31:05.384 } 00:31:05.384 ], 00:31:05.384 "read-only": true 00:31:05.384 }, 00:31:05.384 { 00:31:05.384 "name": "verbose_mode", 
00:31:05.384 "value": true, 00:31:05.384 "unit": "", 00:31:05.384 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:05.384 }, 00:31:05.384 { 00:31:05.384 "name": "prep_upgrade_on_shutdown", 00:31:05.384 "value": true, 00:31:05.384 "unit": "", 00:31:05.384 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:05.384 } 00:31:05.384 ] 00:31:05.384 } 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83582 ]] 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83582 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83582 ']' 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83582 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83582 00:31:05.643 killing process with pid 83582 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83582' 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83582 00:31:05.643 00:08:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83582 00:31:06.214 [2024-12-06 00:08:38.653425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:06.214 [2024-12-06 00:08:38.665254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.214 [2024-12-06 00:08:38.665287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:06.214 [2024-12-06 00:08:38.665297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:06.214 [2024-12-06 00:08:38.665304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.214 [2024-12-06 00:08:38.665322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:06.214 [2024-12-06 00:08:38.667394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.214 [2024-12-06 00:08:38.667419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:06.214 [2024-12-06 00:08:38.667428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.062 ms 00:31:06.214 [2024-12-06 00:08:38.667438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.517934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.518093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:16.222 [2024-12-06 00:08:47.518115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8850.449 ms 00:31:16.222 [2024-12-06 00:08:47.518122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.519179] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.519193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:16.222 [2024-12-06 00:08:47.519200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:31:16.222 [2024-12-06 00:08:47.519206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.520165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.520184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:16.222 [2024-12-06 00:08:47.520193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:31:16.222 [2024-12-06 00:08:47.520204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.528084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.528111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:16.222 [2024-12-06 00:08:47.528119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.854 ms 00:31:16.222 [2024-12-06 00:08:47.528125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.533103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.533209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:16.222 [2024-12-06 00:08:47.533222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.940 ms 00:31:16.222 [2024-12-06 00:08:47.533228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.533282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.533294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:16.222 [2024-12-06 00:08:47.533301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:16.222 [2024-12-06 00:08:47.533307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.540452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.222 [2024-12-06 00:08:47.540661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:16.222 [2024-12-06 00:08:47.540672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.133 ms 00:31:16.222 [2024-12-06 00:08:47.540678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.222 [2024-12-06 00:08:47.547797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.223 [2024-12-06 00:08:47.547889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:16.223 [2024-12-06 00:08:47.547899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.097 ms 00:31:16.223 [2024-12-06 00:08:47.547905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.554893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.223 [2024-12-06 00:08:47.554997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:16.223 [2024-12-06 00:08:47.555008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.965 ms 00:31:16.223 [2024-12-06 00:08:47.555013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.561918] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.223 [2024-12-06 00:08:47.561943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:16.223 [2024-12-06 00:08:47.561950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.816 ms 00:31:16.223 [2024-12-06 00:08:47.561955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.561991] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:16.223 [2024-12-06 00:08:47.562008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:16.223 [2024-12-06 00:08:47.562027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:16.223 [2024-12-06 00:08:47.562033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:16.223 [2024-12-06 00:08:47.562040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:16.223 [2024-12-06 00:08:47.562127] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:16.223 [2024-12-06 00:08:47.562133] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 895f9941-c2a4-4387-825f-4634a31cfd5e 00:31:16.223 [2024-12-06 00:08:47.562138] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:16.223 [2024-12-06 00:08:47.562143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:16.223 [2024-12-06 00:08:47.562149] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:16.223 [2024-12-06 00:08:47.562154] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:16.223 [2024-12-06 00:08:47.562162] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:16.223 [2024-12-06 00:08:47.562167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:16.223 [2024-12-06 00:08:47.562176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:16.223 [2024-12-06 00:08:47.562181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:16.223 [2024-12-06 00:08:47.562186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:16.223 [2024-12-06 00:08:47.562192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.223 [2024-12-06 00:08:47.562197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:16.223 [2024-12-06 00:08:47.562204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:31:16.223 [2024-12-06 00:08:47.562210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.571759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.223 [2024-12-06 00:08:47.571784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:16.223 [2024-12-06 00:08:47.571796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.537 ms 00:31:16.223 [2024-12-06 00:08:47.571803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.572079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.223 [2024-12-06 00:08:47.572091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:16.223 [2024-12-06 00:08:47.572098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:31:16.223 [2024-12-06 00:08:47.572104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.604688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.604716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:16.223 [2024-12-06 00:08:47.604724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.604731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.604753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.604759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:16.223 [2024-12-06 00:08:47.604764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.604770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.604817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.604825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:16.223 [2024-12-06 00:08:47.604834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.604840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.604852] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.604858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:16.223 [2024-12-06 00:08:47.604864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.604869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.663401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.663437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:16.223 [2024-12-06 00:08:47.663450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.663457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.711164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.711196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:16.223 [2024-12-06 00:08:47.711204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.711210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.711270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.711278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:16.223 [2024-12-06 00:08:47.711285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.711295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.711327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.223 [2024-12-06 00:08:47.711334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:16.223 [2024-12-06 00:08:47.711340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.223 [2024-12-06 00:08:47.711346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.223 [2024-12-06 00:08:47.711411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.224 [2024-12-06 00:08:47.711419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:16.224 [2024-12-06 00:08:47.711425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.224 [2024-12-06 00:08:47.711431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.224 [2024-12-06 00:08:47.711455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.224 [2024-12-06 00:08:47.711462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:16.224 [2024-12-06 00:08:47.711468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.224 [2024-12-06 00:08:47.711473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.224 [2024-12-06 00:08:47.711502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.224 [2024-12-06 00:08:47.711508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:16.224 [2024-12-06 00:08:47.711515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.224 [2024-12-06 00:08:47.711520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.224 
[2024-12-06 00:08:47.711555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:16.224 [2024-12-06 00:08:47.711563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:16.224 [2024-12-06 00:08:47.711570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:16.224 [2024-12-06 00:08:47.711575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.224 [2024-12-06 00:08:47.711664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9046.363 ms, result 0 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84146 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84146 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84146 ']' 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:19.551 00:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:19.551 [2024-12-06 00:08:51.734746] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
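A hedged sketch of the shutdown-with-upgrade-prep sequence traced above, assembled only from the rpc.py, kill/wait, and spdk_tgt invocations already visible in this run; the backgrounding of the new target and the use of $spdk_tgt_pid around the kill are illustrative, not copied from the script:
# 1. Enable prep_upgrade_on_shutdown on the ftl bdev and confirm it in the property dump.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
# 2. Stop the target so FTL persists L2P, NV cache and band metadata and sets the clean state.
kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"
# 3. Bring the target back up against the saved config; the restarted FTL instance then replays startup.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &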
00:31:19.551 [2024-12-06 00:08:51.734869] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84146 ] 00:31:19.551 [2024-12-06 00:08:51.890192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.551 [2024-12-06 00:08:51.972015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.122 [2024-12-06 00:08:52.542353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:20.122 [2024-12-06 00:08:52.543175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:20.122 [2024-12-06 00:08:52.685428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.685462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:20.122 [2024-12-06 00:08:52.685472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:20.122 [2024-12-06 00:08:52.685478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.685517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.685525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:20.122 [2024-12-06 00:08:52.685531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:20.122 [2024-12-06 00:08:52.685537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.685554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:20.122 [2024-12-06 00:08:52.686082] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:20.122 [2024-12-06 00:08:52.686095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.686101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:20.122 [2024-12-06 00:08:52.686107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:31:20.122 [2024-12-06 00:08:52.686112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.687027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:20.122 [2024-12-06 00:08:52.696837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.696866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:20.122 [2024-12-06 00:08:52.696879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.811 ms 00:31:20.122 [2024-12-06 00:08:52.696885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.696930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.696937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:20.122 [2024-12-06 00:08:52.696944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:20.122 [2024-12-06 00:08:52.696950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.701299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 
00:08:52.701325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:20.122 [2024-12-06 00:08:52.701333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.285 ms 00:31:20.122 [2024-12-06 00:08:52.701338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.701410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.701418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:20.122 [2024-12-06 00:08:52.701424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:20.122 [2024-12-06 00:08:52.701430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.701463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.701473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:20.122 [2024-12-06 00:08:52.701479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:20.122 [2024-12-06 00:08:52.701485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.701502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:20.122 [2024-12-06 00:08:52.704125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.704251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:20.122 [2024-12-06 00:08:52.704264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.626 ms 00:31:20.122 [2024-12-06 00:08:52.704273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.704296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.704302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:20.122 [2024-12-06 00:08:52.704309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:20.122 [2024-12-06 00:08:52.704314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.704330] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:20.122 [2024-12-06 00:08:52.704348] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:20.122 [2024-12-06 00:08:52.704374] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:20.122 [2024-12-06 00:08:52.704386] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:20.122 [2024-12-06 00:08:52.704465] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:20.122 [2024-12-06 00:08:52.704473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:20.122 [2024-12-06 00:08:52.704480] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:20.122 [2024-12-06 00:08:52.704488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:20.122 [2024-12-06 00:08:52.704495] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:20.122 [2024-12-06 00:08:52.704503] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:20.122 [2024-12-06 00:08:52.704508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:20.122 [2024-12-06 00:08:52.704513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:20.122 [2024-12-06 00:08:52.704519] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:20.122 [2024-12-06 00:08:52.704524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.704530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:20.122 [2024-12-06 00:08:52.704536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:31:20.122 [2024-12-06 00:08:52.704542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.704608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.122 [2024-12-06 00:08:52.704614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:20.122 [2024-12-06 00:08:52.704622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:31:20.122 [2024-12-06 00:08:52.704627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.122 [2024-12-06 00:08:52.704703] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:20.122 [2024-12-06 00:08:52.704710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:20.122 [2024-12-06 00:08:52.704716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:20.122 [2024-12-06 00:08:52.704722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.122 [2024-12-06 00:08:52.704728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:20.122 [2024-12-06 00:08:52.704733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:20.122 [2024-12-06 00:08:52.704738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:20.122 [2024-12-06 00:08:52.704743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:20.122 [2024-12-06 00:08:52.704749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:20.122 [2024-12-06 00:08:52.704754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.122 [2024-12-06 00:08:52.704759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:20.122 [2024-12-06 00:08:52.704765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:20.122 [2024-12-06 00:08:52.704770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.122 [2024-12-06 00:08:52.704775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:20.122 [2024-12-06 00:08:52.704781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:20.122 [2024-12-06 00:08:52.704786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.122 [2024-12-06 00:08:52.704791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:20.122 [2024-12-06 00:08:52.704796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:20.122 [2024-12-06 00:08:52.704801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.122 [2024-12-06 00:08:52.704806] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:20.122 [2024-12-06 00:08:52.704811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:20.122 [2024-12-06 00:08:52.704816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:20.122 [2024-12-06 00:08:52.704821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:20.122 [2024-12-06 00:08:52.704830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:20.122 [2024-12-06 00:08:52.704835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:20.122 [2024-12-06 00:08:52.704839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:20.122 [2024-12-06 00:08:52.704844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:20.123 [2024-12-06 00:08:52.704849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:20.123 [2024-12-06 00:08:52.704854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:20.123 [2024-12-06 00:08:52.704859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:20.123 [2024-12-06 00:08:52.704863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:20.123 [2024-12-06 00:08:52.704869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:20.123 [2024-12-06 00:08:52.704873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:20.123 [2024-12-06 00:08:52.704878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.123 [2024-12-06 00:08:52.704883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:20.123 [2024-12-06 00:08:52.704888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:20.123 [2024-12-06 00:08:52.704892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.123 [2024-12-06 00:08:52.704897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:20.123 [2024-12-06 00:08:52.704902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:20.123 [2024-12-06 00:08:52.704907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.123 [2024-12-06 00:08:52.704912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:20.123 [2024-12-06 00:08:52.704916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:20.123 [2024-12-06 00:08:52.704921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.123 [2024-12-06 00:08:52.704926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:20.123 [2024-12-06 00:08:52.704932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:20.123 [2024-12-06 00:08:52.704938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:20.123 [2024-12-06 00:08:52.704944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:20.123 [2024-12-06 00:08:52.704952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:20.123 [2024-12-06 00:08:52.704957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:20.123 [2024-12-06 00:08:52.704962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:20.123 [2024-12-06 00:08:52.704983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:20.123 [2024-12-06 00:08:52.704989] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:20.123 [2024-12-06 00:08:52.704994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:20.123 [2024-12-06 00:08:52.705000] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:20.123 [2024-12-06 00:08:52.705007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:20.123 [2024-12-06 00:08:52.705019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:20.123 [2024-12-06 00:08:52.705035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:20.123 [2024-12-06 00:08:52.705041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:20.123 [2024-12-06 00:08:52.705046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:20.123 [2024-12-06 00:08:52.705051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:20.123 [2024-12-06 00:08:52.705089] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:20.123 [2024-12-06 00:08:52.705095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:20.123 [2024-12-06 00:08:52.705107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:20.123 [2024-12-06 00:08:52.705112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:20.123 [2024-12-06 00:08:52.705117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:20.123 [2024-12-06 00:08:52.705124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:20.123 [2024-12-06 00:08:52.705130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:20.123 [2024-12-06 00:08:52.705137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.475 ms 00:31:20.123 [2024-12-06 00:08:52.705143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:20.123 [2024-12-06 00:08:52.705175] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:20.123 [2024-12-06 00:08:52.705183] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:24.328 [2024-12-06 00:08:56.296001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.296301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:24.328 [2024-12-06 00:08:56.296330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3590.810 ms 00:31:24.328 [2024-12-06 00:08:56.296340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.327144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.327338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:24.328 [2024-12-06 00:08:56.327359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.565 ms 00:31:24.328 [2024-12-06 00:08:56.327368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.327466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.327485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:24.328 [2024-12-06 00:08:56.327496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:24.328 [2024-12-06 00:08:56.327505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.362577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.362761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:24.328 [2024-12-06 00:08:56.362786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.032 ms 00:31:24.328 [2024-12-06 00:08:56.362795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.362831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.362840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:24.328 [2024-12-06 00:08:56.362849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:24.328 [2024-12-06 00:08:56.362857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.363437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.363460] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:24.328 [2024-12-06 00:08:56.363471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:31:24.328 [2024-12-06 00:08:56.363480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.363531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.363540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:24.328 [2024-12-06 00:08:56.363549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:24.328 [2024-12-06 00:08:56.363557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.380990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.381032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:24.328 [2024-12-06 00:08:56.381043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.408 ms 00:31:24.328 [2024-12-06 00:08:56.381052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.413557] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:24.328 [2024-12-06 00:08:56.413613] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:24.328 [2024-12-06 00:08:56.413628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.413638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:24.328 [2024-12-06 00:08:56.413649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.464 ms 00:31:24.328 [2024-12-06 00:08:56.413656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.428805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.428853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:24.328 [2024-12-06 00:08:56.428865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.094 ms 00:31:24.328 [2024-12-06 00:08:56.428874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.441216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.441260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:24.328 [2024-12-06 00:08:56.441273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.290 ms 00:31:24.328 [2024-12-06 00:08:56.441280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.453692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.453735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:24.328 [2024-12-06 00:08:56.453746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.366 ms 00:31:24.328 [2024-12-06 00:08:56.453754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.328 [2024-12-06 00:08:56.454418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.328 [2024-12-06 00:08:56.454450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:24.329 [2024-12-06 
00:08:56.454460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:31:24.329 [2024-12-06 00:08:56.454468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.519108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.519172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:24.329 [2024-12-06 00:08:56.519187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.618 ms 00:31:24.329 [2024-12-06 00:08:56.519196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.530458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:24.329 [2024-12-06 00:08:56.531526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.531567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:24.329 [2024-12-06 00:08:56.531579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.264 ms 00:31:24.329 [2024-12-06 00:08:56.531588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.531692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.531707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:24.329 [2024-12-06 00:08:56.531718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:24.329 [2024-12-06 00:08:56.531727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.531789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.531801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:24.329 [2024-12-06 00:08:56.531810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:24.329 [2024-12-06 00:08:56.531819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.531842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.531851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:24.329 [2024-12-06 00:08:56.531863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:24.329 [2024-12-06 00:08:56.531872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.531908] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:24.329 [2024-12-06 00:08:56.531919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.531928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:24.329 [2024-12-06 00:08:56.531937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:24.329 [2024-12-06 00:08:56.531946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.557147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.557197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:24.329 [2024-12-06 00:08:56.557211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.180 ms 00:31:24.329 [2024-12-06 00:08:56.557220] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.557306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.557316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:24.329 [2024-12-06 00:08:56.557326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:24.329 [2024-12-06 00:08:56.557334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.560186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3874.180 ms, result 0 00:31:24.329 [2024-12-06 00:08:56.573554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.329 [2024-12-06 00:08:56.589538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:24.329 [2024-12-06 00:08:56.597718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:24.329 00:08:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.329 00:08:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:24.329 00:08:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:24.329 00:08:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:24.329 00:08:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:24.329 [2024-12-06 00:08:56.922009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.922061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:24.329 [2024-12-06 00:08:56.922081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:24.329 [2024-12-06 00:08:56.922090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.922117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.922127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:24.329 [2024-12-06 00:08:56.922136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:24.329 [2024-12-06 00:08:56.922144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.922165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:24.329 [2024-12-06 00:08:56.922175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:24.329 [2024-12-06 00:08:56.922184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:24.329 [2024-12-06 00:08:56.922191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:24.329 [2024-12-06 00:08:56.922257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.258 ms, result 0 00:31:24.329 true 00:31:24.329 00:08:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:24.593 { 00:31:24.593 "name": "ftl", 00:31:24.593 "properties": [ 00:31:24.593 { 00:31:24.593 "name": "superblock_version", 00:31:24.593 "value": 5, 00:31:24.593 "read-only": true 00:31:24.593 }, 
00:31:24.593 { 00:31:24.593 "name": "base_device", 00:31:24.593 "bands": [ 00:31:24.593 { 00:31:24.593 "id": 0, 00:31:24.593 "state": "CLOSED", 00:31:24.593 "validity": 1.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 1, 00:31:24.593 "state": "CLOSED", 00:31:24.593 "validity": 1.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 2, 00:31:24.593 "state": "CLOSED", 00:31:24.593 "validity": 0.007843137254901933 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 3, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 4, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 5, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 6, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 7, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 8, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 9, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 10, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 11, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 12, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 13, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 14, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 15, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 16, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 17, 00:31:24.593 "state": "FREE", 00:31:24.593 "validity": 0.0 00:31:24.593 } 00:31:24.593 ], 00:31:24.593 "read-only": true 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "name": "cache_device", 00:31:24.593 "type": "bdev", 00:31:24.593 "chunks": [ 00:31:24.593 { 00:31:24.593 "id": 0, 00:31:24.593 "state": "INACTIVE", 00:31:24.593 "utilization": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 1, 00:31:24.593 "state": "OPEN", 00:31:24.593 "utilization": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 2, 00:31:24.593 "state": "OPEN", 00:31:24.593 "utilization": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.593 "id": 3, 00:31:24.593 "state": "FREE", 00:31:24.593 "utilization": 0.0 00:31:24.593 }, 00:31:24.593 { 00:31:24.594 "id": 4, 00:31:24.594 "state": "FREE", 00:31:24.594 "utilization": 0.0 00:31:24.594 } 00:31:24.594 ], 00:31:24.594 "read-only": true 00:31:24.594 }, 00:31:24.594 { 00:31:24.594 "name": "verbose_mode", 00:31:24.594 "value": true, 00:31:24.594 "unit": "", 00:31:24.594 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:24.594 }, 00:31:24.594 { 00:31:24.594 "name": "prep_upgrade_on_shutdown", 00:31:24.594 "value": false, 00:31:24.594 "unit": "", 00:31:24.594 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:24.594 } 00:31:24.594 ] 00:31:24.594 } 00:31:24.594 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:24.594 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:24.594 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:24.856 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:24.856 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:24.856 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:24.856 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:24.856 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:25.118 Validate MD5 checksum, iteration 1 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:25.118 00:08:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:25.118 [2024-12-06 00:08:57.669900] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:31:25.118 [2024-12-06 00:08:57.670057] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84228 ] 00:31:25.395 [2024-12-06 00:08:57.835545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.395 [2024-12-06 00:08:57.981121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.312  [2024-12-06T00:09:00.593Z] Copying: 533/1024 [MB] (533 MBps) [2024-12-06T00:09:01.530Z] Copying: 1024/1024 [MB] (average 548 MBps) 00:31:28.821 00:31:28.821 00:09:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:28.821 00:09:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=60bd3beee1a84a240d2dac5987724a65 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 60bd3beee1a84a240d2dac5987724a65 != \6\0\b\d\3\b\e\e\e\1\a\8\4\a\2\4\0\d\2\d\a\c\5\9\8\7\7\2\4\a\6\5 ]] 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:31.369 Validate MD5 checksum, iteration 2 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:31.369 00:09:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:31.369 [2024-12-06 00:09:03.740416] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
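The checksum pass itself (upgrade_shutdown.sh@96-@105, run here for skip=0 and skip=1024) reads the ftln1 bdev in 1024 MiB windows through spdk_dd over the NVMe/TCP connection, hashes the window file with md5sum, and compares against the sums recorded earlier in the test. A minimal sketch of that loop, with the spdk_dd arguments copied from the trace and the two reference sums taken from this particular run for illustration:

#!/usr/bin/env bash
# Illustrative sketch of test_validate_checksum as traced above; the reference
# sums below are the ones recorded earlier in this specific run.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
out=$spdk/test/ftl/file
expected=(60bd3beee1a84a240d2dac5987724a65 80b6efe987e9bb25b17c14ad4d41415a)

skip=0
for ref in "${expected[@]}"; do
  # tcp_dd in ftl/common.sh wraps this spdk_dd invocation (arguments as traced).
  "$spdk"/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$spdk"/test/ftl/config/ini.json \
      --ib=ftln1 --of="$out" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  sum=$(md5sum "$out" | cut -f1 -d' ')
  if [[ $sum != "$ref" ]]; then
    echo "MD5 mismatch at skip=$skip: got $sum, expected $ref" >&2
    exit 1
  fi
  skip=$((skip + 1024))
done

Because the same loop runs again after the dirty restart below, matching sums demonstrate that the written data survived the unclean shutdown and the recovery that follows.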
00:31:31.369 [2024-12-06 00:09:03.740523] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84295 ] 00:31:31.369 [2024-12-06 00:09:03.894816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.369 [2024-12-06 00:09:03.983473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.744  [2024-12-06T00:09:06.020Z] Copying: 691/1024 [MB] (691 MBps) [2024-12-06T00:09:06.957Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:31:34.248 00:31:34.248 00:09:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:34.248 00:09:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=80b6efe987e9bb25b17c14ad4d41415a 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 80b6efe987e9bb25b17c14ad4d41415a != \8\0\b\6\e\f\e\9\8\7\e\9\b\b\2\5\b\1\7\c\1\4\a\d\4\d\4\1\4\1\5\a ]] 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84146 ]] 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84146 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84352 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84352 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84352 ']' 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
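With both windows verified, the trace above (upgrade_shutdown.sh@114-@115 via ftl/common.sh) hard-kills the running target with SIGKILL, so FTL never executes its shutdown path, and immediately starts a fresh spdk_tgt from the saved tgt.json. A simplified re-creation of those two helpers, based only on what the trace shows; waitforlisten is the autotest_common.sh helper and is assumed to be sourced, and spdk_tgt_pid is assumed to hold the PID recorded when the target was first started.

#!/usr/bin/env bash
# Illustrative sketch of tcp_target_shutdown_dirty plus a simplified
# tcp_target_setup, reconstructed from the trace above.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
cfg=$spdk/test/ftl/config/tgt.json

tcp_target_shutdown_dirty() {
  if [[ -n ${spdk_tgt_pid:-} ]]; then
    kill -9 "$spdk_tgt_pid"        # no chance for FTL to persist a clean state
  fi
  unset spdk_tgt_pid
}

tcp_target_setup_from_config() {   # simplified variant of tcp_target_setup
  if [[ ! -f $cfg ]]; then
    echo "missing $cfg" >&2; return 1
  fi
  "$spdk"/build/bin/spdk_tgt '--cpumask=[0]' --config="$cfg" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock answers
}

tcp_target_shutdown_dirty
tcp_target_setup_from_config

The startup log that follows shows the consequence of the SIGKILL: the superblock is loaded with "SHM: clean 0, shm_clean 0", the P2L checkpoints are restored and preprocessed, and two open chunks are recovered before the device reaches a usable state again.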
00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.163 00:09:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:36.163 [2024-12-06 00:09:08.830721] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:31:36.163 [2024-12-06 00:09:08.830860] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84352 ] 00:31:36.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84146 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:36.424 [2024-12-06 00:09:08.986271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.424 [2024-12-06 00:09:09.064418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.997 [2024-12-06 00:09:09.638327] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:36.997 [2024-12-06 00:09:09.638381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:37.261 [2024-12-06 00:09:09.781161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.781194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:37.261 [2024-12-06 00:09:09.781205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:37.261 [2024-12-06 00:09:09.781211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.781251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.781259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:37.261 [2024-12-06 00:09:09.781265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:37.261 [2024-12-06 00:09:09.781271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.781288] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:37.261 [2024-12-06 00:09:09.781825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:37.261 [2024-12-06 00:09:09.781841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.781848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:37.261 [2024-12-06 00:09:09.781855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:31:37.261 [2024-12-06 00:09:09.781860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.782094] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:37.261 [2024-12-06 00:09:09.794554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.794581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:37.261 [2024-12-06 00:09:09.794590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.459 ms 00:31:37.261 [2024-12-06 00:09:09.794598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.801417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:37.261 [2024-12-06 00:09:09.801442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:37.261 [2024-12-06 00:09:09.801450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:37.261 [2024-12-06 00:09:09.801456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.801694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.801707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:37.261 [2024-12-06 00:09:09.801713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:31:37.261 [2024-12-06 00:09:09.801719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.801759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.801766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:37.261 [2024-12-06 00:09:09.801772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:37.261 [2024-12-06 00:09:09.801778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.801795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.801801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:37.261 [2024-12-06 00:09:09.801807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:37.261 [2024-12-06 00:09:09.801812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.801827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:37.261 [2024-12-06 00:09:09.804026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.804046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:37.261 [2024-12-06 00:09:09.804053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.202 ms 00:31:37.261 [2024-12-06 00:09:09.804058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.804081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.804088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:37.261 [2024-12-06 00:09:09.804094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:37.261 [2024-12-06 00:09:09.804100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.804115] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:37.261 [2024-12-06 00:09:09.804130] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:37.261 [2024-12-06 00:09:09.804164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:37.261 [2024-12-06 00:09:09.804176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:37.261 [2024-12-06 00:09:09.804256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:37.261 [2024-12-06 00:09:09.804267] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:37.261 [2024-12-06 00:09:09.804275] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:37.261 [2024-12-06 00:09:09.804283] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:37.261 [2024-12-06 00:09:09.804290] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:37.261 [2024-12-06 00:09:09.804296] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:37.261 [2024-12-06 00:09:09.804301] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:37.261 [2024-12-06 00:09:09.804306] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:37.261 [2024-12-06 00:09:09.804312] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:37.261 [2024-12-06 00:09:09.804319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.804325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:37.261 [2024-12-06 00:09:09.804332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:31:37.261 [2024-12-06 00:09:09.804337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.804403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.261 [2024-12-06 00:09:09.804409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:37.261 [2024-12-06 00:09:09.804414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:37.261 [2024-12-06 00:09:09.804419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.261 [2024-12-06 00:09:09.804494] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:37.261 [2024-12-06 00:09:09.804503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:37.261 [2024-12-06 00:09:09.804509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:37.261 [2024-12-06 00:09:09.804515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.261 [2024-12-06 00:09:09.804521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:37.261 [2024-12-06 00:09:09.804526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:37.261 [2024-12-06 00:09:09.804531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:37.261 [2024-12-06 00:09:09.804536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:37.261 [2024-12-06 00:09:09.804542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:37.261 [2024-12-06 00:09:09.804547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.261 [2024-12-06 00:09:09.804553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:37.261 [2024-12-06 00:09:09.804558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:37.261 [2024-12-06 00:09:09.804563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.261 [2024-12-06 00:09:09.804568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:37.261 [2024-12-06 00:09:09.804574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:37.261 [2024-12-06 00:09:09.804579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.261 [2024-12-06 00:09:09.804584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:37.261 [2024-12-06 00:09:09.804589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:37.261 [2024-12-06 00:09:09.804594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.261 [2024-12-06 00:09:09.804600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:37.262 [2024-12-06 00:09:09.804605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:37.262 [2024-12-06 00:09:09.804614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:37.262 [2024-12-06 00:09:09.804624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:37.262 [2024-12-06 00:09:09.804629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:37.262 [2024-12-06 00:09:09.804639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:37.262 [2024-12-06 00:09:09.804643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:37.262 [2024-12-06 00:09:09.804653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:37.262 [2024-12-06 00:09:09.804658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:37.262 [2024-12-06 00:09:09.804667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:37.262 [2024-12-06 00:09:09.804672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.262 [2024-12-06 00:09:09.804677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:37.262 [2024-12-06 00:09:09.804682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.262 [2024-12-06 00:09:09.804691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:37.262 [2024-12-06 00:09:09.804696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:37.262 [2024-12-06 00:09:09.804701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.262 [2024-12-06 00:09:09.804706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:37.262 [2024-12-06 00:09:09.804711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:37.262 [2024-12-06 00:09:09.804716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:37.262 [2024-12-06 00:09:09.804721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:37.262 [2024-12-06 00:09:09.804727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:37.262 [2024-12-06 00:09:09.804732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:37.262 [2024-12-06 00:09:09.804744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:37.262 [2024-12-06 00:09:09.804749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:37.262 [2024-12-06 00:09:09.804754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:37.262 [2024-12-06 00:09:09.804759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:37.262 [2024-12-06 00:09:09.804764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:37.262 [2024-12-06 00:09:09.804769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:37.262 [2024-12-06 00:09:09.804775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:37.262 [2024-12-06 00:09:09.804782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:37.262 [2024-12-06 00:09:09.804794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:37.262 [2024-12-06 00:09:09.804809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:37.262 [2024-12-06 00:09:09.804815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:37.262 [2024-12-06 00:09:09.804820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:37.262 [2024-12-06 00:09:09.804825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:37.262 [2024-12-06 00:09:09.804862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:37.262 [2024-12-06 00:09:09.804868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:37.262 [2024-12-06 00:09:09.804881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:37.262 [2024-12-06 00:09:09.804886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:37.262 [2024-12-06 00:09:09.804892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:37.262 [2024-12-06 00:09:09.804898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.804904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:37.262 [2024-12-06 00:09:09.804909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:31:37.262 [2024-12-06 00:09:09.804915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.824382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.824404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:37.262 [2024-12-06 00:09:09.824412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.432 ms 00:31:37.262 [2024-12-06 00:09:09.824418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.824447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.824453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:37.262 [2024-12-06 00:09:09.824460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:37.262 [2024-12-06 00:09:09.824466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.848312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.848335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:37.262 [2024-12-06 00:09:09.848342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.806 ms 00:31:37.262 [2024-12-06 00:09:09.848348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.848367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.848374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:37.262 [2024-12-06 00:09:09.848380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:37.262 [2024-12-06 00:09:09.848388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.848455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.848463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:37.262 [2024-12-06 00:09:09.848470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:37.262 [2024-12-06 00:09:09.848475] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.848505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.848511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:37.262 [2024-12-06 00:09:09.848517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:37.262 [2024-12-06 00:09:09.848522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.859931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.859956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:37.262 [2024-12-06 00:09:09.859963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.389 ms 00:31:37.262 [2024-12-06 00:09:09.859980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.860053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.860061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:37.262 [2024-12-06 00:09:09.860068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:37.262 [2024-12-06 00:09:09.860074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.890100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.890129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:37.262 [2024-12-06 00:09:09.890139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.012 ms 00:31:37.262 [2024-12-06 00:09:09.890146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.897354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.897376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:37.262 [2024-12-06 00:09:09.897390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:31:37.262 [2024-12-06 00:09:09.897396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.262 [2024-12-06 00:09:09.941224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.262 [2024-12-06 00:09:09.941258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:37.262 [2024-12-06 00:09:09.941267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.786 ms 00:31:37.262 [2024-12-06 00:09:09.941273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.263 [2024-12-06 00:09:09.941372] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:37.263 [2024-12-06 00:09:09.941446] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:37.263 [2024-12-06 00:09:09.941517] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:37.263 [2024-12-06 00:09:09.941591] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:37.263 [2024-12-06 00:09:09.941597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.263 [2024-12-06 00:09:09.941604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:37.263 [2024-12-06 
00:09:09.941610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:31:37.263 [2024-12-06 00:09:09.941616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.263 [2024-12-06 00:09:09.941660] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:37.263 [2024-12-06 00:09:09.941680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.263 [2024-12-06 00:09:09.941690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:37.263 [2024-12-06 00:09:09.941696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:37.263 [2024-12-06 00:09:09.941702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.263 [2024-12-06 00:09:09.952883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.263 [2024-12-06 00:09:09.952912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:37.263 [2024-12-06 00:09:09.952920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.164 ms 00:31:37.263 [2024-12-06 00:09:09.952927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.263 [2024-12-06 00:09:09.959234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.263 [2024-12-06 00:09:09.959256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:37.263 [2024-12-06 00:09:09.959264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:37.263 [2024-12-06 00:09:09.959270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.263 [2024-12-06 00:09:09.959333] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:37.263 [2024-12-06 00:09:09.959442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.263 [2024-12-06 00:09:09.959450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:37.263 [2024-12-06 00:09:09.959457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.110 ms 00:31:37.263 [2024-12-06 00:09:09.959463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.207 [2024-12-06 00:09:10.649096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.207 [2024-12-06 00:09:10.649142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:38.207 [2024-12-06 00:09:10.649154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 688.978 ms 00:31:38.207 [2024-12-06 00:09:10.649161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.207 [2024-12-06 00:09:10.652686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.207 [2024-12-06 00:09:10.652713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:38.207 [2024-12-06 00:09:10.652722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.981 ms 00:31:38.207 [2024-12-06 00:09:10.652728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.207 [2024-12-06 00:09:10.653335] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:38.207 [2024-12-06 00:09:10.653361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.207 [2024-12-06 00:09:10.653367] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:38.207 [2024-12-06 00:09:10.653375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.606 ms 00:31:38.207 [2024-12-06 00:09:10.653381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.207 [2024-12-06 00:09:10.653406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.207 [2024-12-06 00:09:10.653414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:38.207 [2024-12-06 00:09:10.653421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:38.207 [2024-12-06 00:09:10.653430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.207 [2024-12-06 00:09:10.653456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 694.121 ms, result 0 00:31:38.207 [2024-12-06 00:09:10.653485] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:38.207 [2024-12-06 00:09:10.653574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.207 [2024-12-06 00:09:10.653582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:38.207 [2024-12-06 00:09:10.653588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.090 ms 00:31:38.207 [2024-12-06 00:09:10.653595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.185200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.185244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:38.780 [2024-12-06 00:09:11.185262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 530.841 ms 00:31:38.780 [2024-12-06 00:09:11.185269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.188694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.188722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:38.780 [2024-12-06 00:09:11.188730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.020 ms 00:31:38.780 [2024-12-06 00:09:11.188738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.189091] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:38.780 [2024-12-06 00:09:11.189113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.189119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:38.780 [2024-12-06 00:09:11.189126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:31:38.780 [2024-12-06 00:09:11.189132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.189154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.189160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:38.780 [2024-12-06 00:09:11.189166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:38.780 [2024-12-06 00:09:11.189172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 
00:09:11.189199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 535.710 ms, result 0 00:31:38.780 [2024-12-06 00:09:11.189231] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:38.780 [2024-12-06 00:09:11.189238] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:38.780 [2024-12-06 00:09:11.189246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.189252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:38.780 [2024-12-06 00:09:11.189258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1229.926 ms 00:31:38.780 [2024-12-06 00:09:11.189264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.189286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.189296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:38.780 [2024-12-06 00:09:11.189302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:38.780 [2024-12-06 00:09:11.189308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.197888] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:38.780 [2024-12-06 00:09:11.197978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.197987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:38.780 [2024-12-06 00:09:11.197994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.659 ms 00:31:38.780 [2024-12-06 00:09:11.198001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.198520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.198536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:38.780 [2024-12-06 00:09:11.198545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:31:38.780 [2024-12-06 00:09:11.198550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.200238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.200253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:38.780 [2024-12-06 00:09:11.200261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.675 ms 00:31:38.780 [2024-12-06 00:09:11.200267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.200297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.200303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:38.780 [2024-12-06 00:09:11.200309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:38.780 [2024-12-06 00:09:11.200318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.200393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.200400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:38.780 
[2024-12-06 00:09:11.200406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:38.780 [2024-12-06 00:09:11.200412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.200427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.200433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:38.780 [2024-12-06 00:09:11.200439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:38.780 [2024-12-06 00:09:11.200444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.200470] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:38.780 [2024-12-06 00:09:11.200477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.200483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:38.780 [2024-12-06 00:09:11.200489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:38.780 [2024-12-06 00:09:11.200495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.200531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.780 [2024-12-06 00:09:11.200538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:38.780 [2024-12-06 00:09:11.200544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:38.780 [2024-12-06 00:09:11.200549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.780 [2024-12-06 00:09:11.201232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1419.725 ms, result 0 00:31:38.780 [2024-12-06 00:09:11.214218] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.780 [2024-12-06 00:09:11.230218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:38.780 [2024-12-06 00:09:11.238314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:38.780 Validate MD5 checksum, iteration 1 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:38.780 00:09:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:38.780 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:38.781 00:09:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:38.781 [2024-12-06 00:09:11.324002] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:31:38.781 [2024-12-06 00:09:11.324083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84381 ] 00:31:38.781 [2024-12-06 00:09:11.474030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.040 [2024-12-06 00:09:11.562303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.416  [2024-12-06T00:09:13.694Z] Copying: 677/1024 [MB] (677 MBps) [2024-12-06T00:09:17.899Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:31:45.190 00:31:45.190 00:09:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:45.190 00:09:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=60bd3beee1a84a240d2dac5987724a65 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 60bd3beee1a84a240d2dac5987724a65 != \6\0\b\d\3\b\e\e\e\1\a\8\4\a\2\4\0\d\2\d\a\c\5\9\8\7\7\2\4\a\6\5 ]] 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:47.102 Validate MD5 checksum, iteration 2 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:47.102 00:09:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:47.102 [2024-12-06 00:09:19.653383] Starting SPDK v25.01-pre git sha1 
a5e6ecf28 / DPDK 24.03.0 initialization... 00:31:47.102 [2024-12-06 00:09:19.653493] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84470 ] 00:31:47.363 [2024-12-06 00:09:19.812130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.363 [2024-12-06 00:09:19.918104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.748  [2024-12-06T00:09:22.392Z] Copying: 562/1024 [MB] (562 MBps) [2024-12-06T00:09:24.303Z] Copying: 1024/1024 [MB] (average 574 MBps) 00:31:51.594 00:31:51.594 00:09:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:51.594 00:09:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=80b6efe987e9bb25b17c14ad4d41415a 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 80b6efe987e9bb25b17c14ad4d41415a != \8\0\b\6\e\f\e\9\8\7\e\9\b\b\2\5\b\1\7\c\1\4\a\d\4\d\4\1\4\1\5\a ]] 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:53.507 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84352 ]] 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84352 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84352 ']' 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84352 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84352 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.769 killing process with pid 84352 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84352' 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84352 00:31:53.769 00:09:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84352 00:31:54.711 [2024-12-06 00:09:27.197827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:54.711 [2024-12-06 00:09:27.215531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.711 [2024-12-06 00:09:27.215591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:54.711 [2024-12-06 00:09:27.215608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:54.711 [2024-12-06 00:09:27.215618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.711 [2024-12-06 00:09:27.215644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:54.711 [2024-12-06 00:09:27.219066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.711 [2024-12-06 00:09:27.219120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:54.711 [2024-12-06 00:09:27.219133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.403 ms 00:31:54.711 [2024-12-06 00:09:27.219142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.711 [2024-12-06 00:09:27.219418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.711 [2024-12-06 00:09:27.219431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:54.711 [2024-12-06 00:09:27.219443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:31:54.711 [2024-12-06 00:09:27.219452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.711 [2024-12-06 00:09:27.221408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.221452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:54.712 [2024-12-06 00:09:27.221466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.935 ms 00:31:54.712 [2024-12-06 00:09:27.221483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.222640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.222666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:54.712 [2024-12-06 00:09:27.222678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.112 ms 00:31:54.712 [2024-12-06 00:09:27.222689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.234184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.234231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:54.712 [2024-12-06 00:09:27.234252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.451 ms 00:31:54.712 [2024-12-06 00:09:27.234261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.240654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.240702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:54.712 [2024-12-06 00:09:27.240715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.338 ms 00:31:54.712 [2024-12-06 00:09:27.240725] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.240822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.240834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:54.712 [2024-12-06 00:09:27.240846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:31:54.712 [2024-12-06 00:09:27.240864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.251798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.251841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:54.712 [2024-12-06 00:09:27.251854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.913 ms 00:31:54.712 [2024-12-06 00:09:27.251863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.261767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.261808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:54.712 [2024-12-06 00:09:27.261819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.853 ms 00:31:54.712 [2024-12-06 00:09:27.261825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.270466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.270506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:54.712 [2024-12-06 00:09:27.270516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.592 ms 00:31:54.712 [2024-12-06 00:09:27.270523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.279083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.279122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:54.712 [2024-12-06 00:09:27.279131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.486 ms 00:31:54.712 [2024-12-06 00:09:27.279138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.279179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:54.712 [2024-12-06 00:09:27.279194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:54.712 [2024-12-06 00:09:27.279205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:54.712 [2024-12-06 00:09:27.279212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:54.712 [2024-12-06 00:09:27.279220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 
[2024-12-06 00:09:27.279258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:54.712 [2024-12-06 00:09:27.279324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:54.712 [2024-12-06 00:09:27.279331] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 895f9941-c2a4-4387-825f-4634a31cfd5e 00:31:54.712 [2024-12-06 00:09:27.279339] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:54.712 [2024-12-06 00:09:27.279345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:54.712 [2024-12-06 00:09:27.279351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:54.712 [2024-12-06 00:09:27.279358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:54.712 [2024-12-06 00:09:27.279364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:54.712 [2024-12-06 00:09:27.279371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:54.712 [2024-12-06 00:09:27.279383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:54.712 [2024-12-06 00:09:27.279389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:54.712 [2024-12-06 00:09:27.279397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:54.712 [2024-12-06 00:09:27.279407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.279415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:54.712 [2024-12-06 00:09:27.279423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:31:54.712 [2024-12-06 00:09:27.279431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.291053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.291089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:54.712 [2024-12-06 00:09:27.291099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.592 ms 00:31:54.712 [2024-12-06 00:09:27.291107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
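Note: the two "Validate MD5 checksum" iterations earlier in this trace follow a simple read-then-compare loop in upgrade_shutdown.sh: copy 1024 MiB out of ftln1 with spdk_dd (via the tcp_dd helper), hash the output file, and compare the digest. A rough sketch of that loop, using only the commands visible in the trace; $testdir and the md5 array are illustrative names, and the reference digests are assumed to have been recorded when the data was first written earlier in the test:

  skip=0
  for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$(( skip + 1024 ))
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    # the test aborts if the data read back after the upgrade/shutdown cycle differs
    [[ $sum == "${md5[i]}" ]] || exit 1
  done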
00:31:54.712 [2024-12-06 00:09:27.291457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:54.712 [2024-12-06 00:09:27.291475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:54.712 [2024-12-06 00:09:27.291485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:31:54.712 [2024-12-06 00:09:27.291492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.330992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.712 [2024-12-06 00:09:27.331020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:54.712 [2024-12-06 00:09:27.331029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.712 [2024-12-06 00:09:27.331041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.331068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.712 [2024-12-06 00:09:27.331076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:54.712 [2024-12-06 00:09:27.331094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.712 [2024-12-06 00:09:27.331101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.331176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.712 [2024-12-06 00:09:27.331185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:54.712 [2024-12-06 00:09:27.331193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.712 [2024-12-06 00:09:27.331200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.331217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.712 [2024-12-06 00:09:27.331225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:54.712 [2024-12-06 00:09:27.331231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.712 [2024-12-06 00:09:27.331237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.712 [2024-12-06 00:09:27.396523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.712 [2024-12-06 00:09:27.396552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:54.713 [2024-12-06 00:09:27.396562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.713 [2024-12-06 00:09:27.396570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.973 [2024-12-06 00:09:27.447831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.973 [2024-12-06 00:09:27.447860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:54.973 [2024-12-06 00:09:27.447870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.973 [2024-12-06 00:09:27.447877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.973 [2024-12-06 00:09:27.447937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.973 [2024-12-06 00:09:27.447945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:54.973 [2024-12-06 00:09:27.447952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.973 [2024-12-06 00:09:27.447959] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.973 [2024-12-06 00:09:27.448016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.974 [2024-12-06 00:09:27.448035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:54.974 [2024-12-06 00:09:27.448042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.974 [2024-12-06 00:09:27.448048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.974 [2024-12-06 00:09:27.448129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.974 [2024-12-06 00:09:27.448138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:54.974 [2024-12-06 00:09:27.448145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.974 [2024-12-06 00:09:27.448151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.974 [2024-12-06 00:09:27.448191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.974 [2024-12-06 00:09:27.448202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:54.974 [2024-12-06 00:09:27.448214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.974 [2024-12-06 00:09:27.448224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.974 [2024-12-06 00:09:27.448272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.974 [2024-12-06 00:09:27.448283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:54.974 [2024-12-06 00:09:27.448294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.974 [2024-12-06 00:09:27.448304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.974 [2024-12-06 00:09:27.448359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:54.974 [2024-12-06 00:09:27.448374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:54.974 [2024-12-06 00:09:27.448384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:54.974 [2024-12-06 00:09:27.448394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:54.974 [2024-12-06 00:09:27.448526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 232.973 ms, result 0 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:55.959 Remove shared memory files 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:55.959 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:55.960 00:09:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84146 00:31:55.960 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:55.960 00:09:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:55.960 00:31:55.960 real 1m25.193s 00:31:55.960 user 1m56.510s 00:31:55.960 sys 0m19.858s 00:31:55.960 00:09:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.960 00:09:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:55.960 ************************************ 00:31:55.960 END TEST ftl_upgrade_shutdown 00:31:55.960 ************************************ 00:31:55.960 00:09:28 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:55.960 00:09:28 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:55.960 00:09:28 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:55.960 00:09:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.960 00:09:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:55.960 ************************************ 00:31:55.960 START TEST ftl_restore_fast 00:31:55.960 ************************************ 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:55.960 * Looking for test storage... 00:31:55.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:55.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.960 --rc genhtml_branch_coverage=1 00:31:55.960 --rc genhtml_function_coverage=1 00:31:55.960 --rc genhtml_legend=1 00:31:55.960 --rc geninfo_all_blocks=1 00:31:55.960 --rc geninfo_unexecuted_blocks=1 00:31:55.960 00:31:55.960 ' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:55.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.960 --rc genhtml_branch_coverage=1 00:31:55.960 --rc genhtml_function_coverage=1 00:31:55.960 --rc genhtml_legend=1 00:31:55.960 --rc geninfo_all_blocks=1 00:31:55.960 --rc geninfo_unexecuted_blocks=1 00:31:55.960 00:31:55.960 ' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:55.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.960 --rc genhtml_branch_coverage=1 00:31:55.960 --rc genhtml_function_coverage=1 00:31:55.960 --rc genhtml_legend=1 00:31:55.960 --rc geninfo_all_blocks=1 00:31:55.960 --rc geninfo_unexecuted_blocks=1 00:31:55.960 00:31:55.960 ' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:55.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.960 --rc genhtml_branch_coverage=1 00:31:55.960 --rc genhtml_function_coverage=1 00:31:55.960 --rc genhtml_legend=1 00:31:55.960 --rc geninfo_all_blocks=1 00:31:55.960 --rc geninfo_unexecuted_blocks=1 00:31:55.960 00:31:55.960 ' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
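Note: the lcov guard just traced (lt 1.15 2 -> cmp_versions 1.15 '<' 2) splits both version strings on [.-:] and compares them component by component. A condensed sketch of that logic, specialized here to the '<' comparison actually exercised and omitting the non-numeric handling the real scripts/common.sh performs:

  lt() {
    # sketch of cmp_versions "$1" '<' "$2" from scripts/common.sh
    local ver1 ver2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not '<'
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: '<' holds
    done
    return 1   # equal versions are not '<'
  }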
00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.76uVJ5Z0bV 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:55.960 00:09:28 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84637 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84637 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84637 ']' 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.960 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.961 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.961 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.961 00:09:28 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:56.245 [2024-12-06 00:09:28.749998] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:31:56.245 [2024-12-06 00:09:28.750300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84637 ] 00:31:56.245 [2024-12-06 00:09:28.917010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.505 [2024-12-06 00:09:29.022543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:57.077 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:57.337 00:09:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:57.598 { 00:31:57.598 "name": "nvme0n1", 00:31:57.598 "aliases": [ 00:31:57.598 "13c45d13-2cc8-4e15-a0c2-88543f41c6b6" 00:31:57.598 ], 00:31:57.598 "product_name": "NVMe disk", 00:31:57.598 "block_size": 4096, 00:31:57.598 "num_blocks": 1310720, 00:31:57.598 "uuid": "13c45d13-2cc8-4e15-a0c2-88543f41c6b6", 00:31:57.598 "numa_id": -1, 00:31:57.598 "assigned_rate_limits": { 00:31:57.598 "rw_ios_per_sec": 0, 00:31:57.598 "rw_mbytes_per_sec": 0, 00:31:57.598 "r_mbytes_per_sec": 0, 00:31:57.598 "w_mbytes_per_sec": 0 00:31:57.598 }, 00:31:57.598 "claimed": true, 00:31:57.598 "claim_type": "read_many_write_one", 00:31:57.598 "zoned": false, 00:31:57.598 "supported_io_types": { 00:31:57.598 "read": true, 00:31:57.598 "write": true, 00:31:57.598 "unmap": true, 00:31:57.598 "flush": true, 00:31:57.598 "reset": true, 00:31:57.598 "nvme_admin": true, 00:31:57.598 "nvme_io": true, 00:31:57.598 "nvme_io_md": false, 00:31:57.598 "write_zeroes": true, 00:31:57.598 "zcopy": false, 00:31:57.598 "get_zone_info": false, 00:31:57.598 "zone_management": false, 00:31:57.598 "zone_append": false, 00:31:57.598 "compare": true, 00:31:57.598 "compare_and_write": false, 00:31:57.598 "abort": true, 00:31:57.598 "seek_hole": false, 00:31:57.598 "seek_data": false, 00:31:57.598 "copy": true, 00:31:57.598 "nvme_iov_md": false 00:31:57.598 }, 00:31:57.598 "driver_specific": { 00:31:57.598 "nvme": [ 00:31:57.598 { 00:31:57.598 "pci_address": "0000:00:11.0", 00:31:57.598 "trid": { 00:31:57.598 "trtype": "PCIe", 00:31:57.598 "traddr": "0000:00:11.0" 00:31:57.598 }, 00:31:57.598 "ctrlr_data": { 00:31:57.598 "cntlid": 0, 00:31:57.598 "vendor_id": "0x1b36", 00:31:57.598 "model_number": "QEMU NVMe Ctrl", 00:31:57.598 "serial_number": "12341", 00:31:57.598 "firmware_revision": "8.0.0", 00:31:57.598 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:57.598 "oacs": { 00:31:57.598 "security": 0, 00:31:57.598 "format": 1, 00:31:57.598 "firmware": 0, 00:31:57.598 "ns_manage": 1 00:31:57.598 }, 00:31:57.598 "multi_ctrlr": false, 00:31:57.598 "ana_reporting": false 00:31:57.598 }, 00:31:57.598 "vs": { 00:31:57.598 "nvme_version": "1.4" 00:31:57.598 }, 00:31:57.598 "ns_data": { 00:31:57.598 "id": 1, 00:31:57.598 "can_share": false 00:31:57.598 } 00:31:57.598 } 00:31:57.598 ], 00:31:57.598 "mp_policy": "active_passive" 00:31:57.598 } 00:31:57.598 } 00:31:57.598 ]' 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:57.598 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:57.858 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=57622697-33a1-4db6-b776-4f1d7be26bf1 00:31:57.858 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:57.858 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57622697-33a1-4db6-b776-4f1d7be26bf1 00:31:58.117 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:58.375 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0795ea50-0180-4d5e-a865-06c1bf296143 00:31:58.375 00:09:30 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0795ea50-0180-4d5e-a865-06c1bf296143 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:58.375 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:58.635 { 00:31:58.635 "name": "8d157cc1-34d2-4c16-b96d-3cfe61739382", 00:31:58.635 "aliases": [ 00:31:58.635 "lvs/nvme0n1p0" 00:31:58.635 ], 00:31:58.635 "product_name": "Logical Volume", 00:31:58.635 "block_size": 4096, 00:31:58.635 "num_blocks": 26476544, 00:31:58.635 "uuid": "8d157cc1-34d2-4c16-b96d-3cfe61739382", 00:31:58.635 "assigned_rate_limits": { 00:31:58.635 "rw_ios_per_sec": 0, 00:31:58.635 "rw_mbytes_per_sec": 0, 00:31:58.635 "r_mbytes_per_sec": 0, 00:31:58.635 "w_mbytes_per_sec": 0 00:31:58.635 }, 00:31:58.635 "claimed": false, 00:31:58.635 "zoned": false, 00:31:58.635 "supported_io_types": { 00:31:58.635 "read": true, 00:31:58.635 "write": true, 00:31:58.635 "unmap": true, 00:31:58.635 "flush": false, 00:31:58.635 "reset": true, 00:31:58.635 "nvme_admin": false, 00:31:58.635 "nvme_io": false, 00:31:58.635 "nvme_io_md": false, 00:31:58.635 "write_zeroes": true, 00:31:58.635 "zcopy": false, 00:31:58.635 "get_zone_info": false, 00:31:58.635 "zone_management": false, 00:31:58.635 
"zone_append": false, 00:31:58.635 "compare": false, 00:31:58.635 "compare_and_write": false, 00:31:58.635 "abort": false, 00:31:58.635 "seek_hole": true, 00:31:58.635 "seek_data": true, 00:31:58.635 "copy": false, 00:31:58.635 "nvme_iov_md": false 00:31:58.635 }, 00:31:58.635 "driver_specific": { 00:31:58.635 "lvol": { 00:31:58.635 "lvol_store_uuid": "0795ea50-0180-4d5e-a865-06c1bf296143", 00:31:58.635 "base_bdev": "nvme0n1", 00:31:58.635 "thin_provision": true, 00:31:58.635 "num_allocated_clusters": 0, 00:31:58.635 "snapshot": false, 00:31:58.635 "clone": false, 00:31:58.635 "esnap_clone": false 00:31:58.635 } 00:31:58.635 } 00:31:58.635 } 00:31:58.635 ]' 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:58.635 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:58.896 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:59.158 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:59.158 { 00:31:59.158 "name": "8d157cc1-34d2-4c16-b96d-3cfe61739382", 00:31:59.158 "aliases": [ 00:31:59.158 "lvs/nvme0n1p0" 00:31:59.158 ], 00:31:59.158 "product_name": "Logical Volume", 00:31:59.158 "block_size": 4096, 00:31:59.158 "num_blocks": 26476544, 00:31:59.158 "uuid": "8d157cc1-34d2-4c16-b96d-3cfe61739382", 00:31:59.158 "assigned_rate_limits": { 00:31:59.158 "rw_ios_per_sec": 0, 00:31:59.158 "rw_mbytes_per_sec": 0, 00:31:59.158 "r_mbytes_per_sec": 0, 00:31:59.158 "w_mbytes_per_sec": 0 00:31:59.158 }, 00:31:59.158 "claimed": false, 00:31:59.158 "zoned": false, 00:31:59.158 "supported_io_types": { 00:31:59.158 "read": true, 00:31:59.158 "write": true, 00:31:59.158 "unmap": true, 00:31:59.158 "flush": false, 00:31:59.158 "reset": true, 00:31:59.158 "nvme_admin": false, 00:31:59.158 "nvme_io": false, 00:31:59.158 "nvme_io_md": false, 00:31:59.158 "write_zeroes": true, 00:31:59.158 "zcopy": false, 00:31:59.158 "get_zone_info": false, 00:31:59.158 
"zone_management": false, 00:31:59.158 "zone_append": false, 00:31:59.158 "compare": false, 00:31:59.158 "compare_and_write": false, 00:31:59.158 "abort": false, 00:31:59.158 "seek_hole": true, 00:31:59.158 "seek_data": true, 00:31:59.158 "copy": false, 00:31:59.158 "nvme_iov_md": false 00:31:59.158 }, 00:31:59.158 "driver_specific": { 00:31:59.158 "lvol": { 00:31:59.158 "lvol_store_uuid": "0795ea50-0180-4d5e-a865-06c1bf296143", 00:31:59.158 "base_bdev": "nvme0n1", 00:31:59.158 "thin_provision": true, 00:31:59.158 "num_allocated_clusters": 0, 00:31:59.158 "snapshot": false, 00:31:59.158 "clone": false, 00:31:59.158 "esnap_clone": false 00:31:59.158 } 00:31:59.158 } 00:31:59.158 } 00:31:59.158 ]' 00:31:59.158 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:59.159 00:09:31 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:59.420 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d157cc1-34d2-4c16-b96d-3cfe61739382 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:59.681 { 00:31:59.681 "name": "8d157cc1-34d2-4c16-b96d-3cfe61739382", 00:31:59.681 "aliases": [ 00:31:59.681 "lvs/nvme0n1p0" 00:31:59.681 ], 00:31:59.681 "product_name": "Logical Volume", 00:31:59.681 "block_size": 4096, 00:31:59.681 "num_blocks": 26476544, 00:31:59.681 "uuid": "8d157cc1-34d2-4c16-b96d-3cfe61739382", 00:31:59.681 "assigned_rate_limits": { 00:31:59.681 "rw_ios_per_sec": 0, 00:31:59.681 "rw_mbytes_per_sec": 0, 00:31:59.681 "r_mbytes_per_sec": 0, 00:31:59.681 "w_mbytes_per_sec": 0 00:31:59.681 }, 00:31:59.681 "claimed": false, 00:31:59.681 "zoned": false, 00:31:59.681 "supported_io_types": { 00:31:59.681 "read": true, 00:31:59.681 "write": true, 00:31:59.681 "unmap": true, 00:31:59.681 "flush": false, 00:31:59.681 "reset": true, 00:31:59.681 "nvme_admin": false, 00:31:59.681 "nvme_io": false, 00:31:59.681 "nvme_io_md": false, 00:31:59.681 "write_zeroes": true, 00:31:59.681 "zcopy": false, 00:31:59.681 "get_zone_info": false, 00:31:59.681 "zone_management": false, 00:31:59.681 "zone_append": false, 00:31:59.681 "compare": false, 00:31:59.681 "compare_and_write": false, 00:31:59.681 "abort": false, 
00:31:59.681 "seek_hole": true, 00:31:59.681 "seek_data": true, 00:31:59.681 "copy": false, 00:31:59.681 "nvme_iov_md": false 00:31:59.681 }, 00:31:59.681 "driver_specific": { 00:31:59.681 "lvol": { 00:31:59.681 "lvol_store_uuid": "0795ea50-0180-4d5e-a865-06c1bf296143", 00:31:59.681 "base_bdev": "nvme0n1", 00:31:59.681 "thin_provision": true, 00:31:59.681 "num_allocated_clusters": 0, 00:31:59.681 "snapshot": false, 00:31:59.681 "clone": false, 00:31:59.681 "esnap_clone": false 00:31:59.681 } 00:31:59.681 } 00:31:59.681 } 00:31:59.681 ]' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8d157cc1-34d2-4c16-b96d-3cfe61739382 --l2p_dram_limit 10' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:59.681 00:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8d157cc1-34d2-4c16-b96d-3cfe61739382 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:59.943 [2024-12-06 00:09:32.516847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.516883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:59.943 [2024-12-06 00:09:32.516895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:59.943 [2024-12-06 00:09:32.516902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.516945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.516953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.943 [2024-12-06 00:09:32.516960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:59.943 [2024-12-06 00:09:32.516979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.516998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:59.943 [2024-12-06 00:09:32.517538] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:59.943 [2024-12-06 00:09:32.517558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.517564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.943 [2024-12-06 00:09:32.517573] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:31:59.943 [2024-12-06 00:09:32.517579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.517603] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4fb7d2de-2db9-4360-a868-f6ce287ca9bb 00:31:59.943 [2024-12-06 00:09:32.518515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.518533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:59.943 [2024-12-06 00:09:32.518541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:59.943 [2024-12-06 00:09:32.518548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.523174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.523207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.943 [2024-12-06 00:09:32.523215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.572 ms 00:31:59.943 [2024-12-06 00:09:32.523221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.523287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.523295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.943 [2024-12-06 00:09:32.523302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:59.943 [2024-12-06 00:09:32.523311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.523339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.523348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:59.943 [2024-12-06 00:09:32.523356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:59.943 [2024-12-06 00:09:32.523363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.523378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:59.943 [2024-12-06 00:09:32.526222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.526246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.943 [2024-12-06 00:09:32.526255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:31:59.943 [2024-12-06 00:09:32.526260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.526288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.526295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:59.943 [2024-12-06 00:09:32.526302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:59.943 [2024-12-06 00:09:32.526308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.526327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:59.943 [2024-12-06 00:09:32.526434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:59.943 [2024-12-06 00:09:32.526447] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:59.943 [2024-12-06 00:09:32.526455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:59.943 [2024-12-06 00:09:32.526465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526478] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:59.943 [2024-12-06 00:09:32.526484] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:59.943 [2024-12-06 00:09:32.526493] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:59.943 [2024-12-06 00:09:32.526499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:59.943 [2024-12-06 00:09:32.526506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.526517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:59.943 [2024-12-06 00:09:32.526525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:31:59.943 [2024-12-06 00:09:32.526530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.526595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.943 [2024-12-06 00:09:32.526602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:59.943 [2024-12-06 00:09:32.526608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:59.943 [2024-12-06 00:09:32.526614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.943 [2024-12-06 00:09:32.526691] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:59.943 [2024-12-06 00:09:32.526698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:59.943 [2024-12-06 00:09:32.526706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:59.943 [2024-12-06 00:09:32.526723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:59.943 [2024-12-06 00:09:32.526742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:59.943 [2024-12-06 00:09:32.526753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:59.943 [2024-12-06 00:09:32.526759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:59.943 [2024-12-06 00:09:32.526766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:59.943 [2024-12-06 00:09:32.526771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:59.943 [2024-12-06 00:09:32.526779] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:59.943 [2024-12-06 00:09:32.526784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:59.943 [2024-12-06 00:09:32.526798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:59.943 [2024-12-06 00:09:32.526815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:59.943 [2024-12-06 00:09:32.526831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:59.943 [2024-12-06 00:09:32.526849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:59.943 [2024-12-06 00:09:32.526865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.943 [2024-12-06 00:09:32.526876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:59.943 [2024-12-06 00:09:32.526883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:59.943 [2024-12-06 00:09:32.526888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:59.943 [2024-12-06 00:09:32.526896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:59.943 [2024-12-06 00:09:32.526901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:59.943 [2024-12-06 00:09:32.526907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:59.943 [2024-12-06 00:09:32.526912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:59.943 [2024-12-06 00:09:32.526918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:59.944 [2024-12-06 00:09:32.526923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.944 [2024-12-06 00:09:32.526929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:59.944 [2024-12-06 00:09:32.526934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:59.944 [2024-12-06 00:09:32.526940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.944 [2024-12-06 00:09:32.526945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:59.944 [2024-12-06 00:09:32.526952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:59.944 [2024-12-06 00:09:32.526957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:59.944 [2024-12-06 00:09:32.526977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.944 [2024-12-06 00:09:32.526984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:59.944 [2024-12-06 00:09:32.526992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:59.944 [2024-12-06 00:09:32.526998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:59.944 [2024-12-06 00:09:32.527005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:59.944 [2024-12-06 00:09:32.527011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:59.944 [2024-12-06 00:09:32.527017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:59.944 [2024-12-06 00:09:32.527024] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:59.944 [2024-12-06 00:09:32.527034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:59.944 [2024-12-06 00:09:32.527047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:59.944 [2024-12-06 00:09:32.527053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:59.944 [2024-12-06 00:09:32.527059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:59.944 [2024-12-06 00:09:32.527065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:59.944 [2024-12-06 00:09:32.527078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:59.944 [2024-12-06 00:09:32.527083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:59.944 [2024-12-06 00:09:32.527090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:59.944 [2024-12-06 00:09:32.527095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:59.944 [2024-12-06 00:09:32.527103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
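The SB metadata layout notices above describe each nvc region by type, version, block offset and block size in hex. As a cross-check (a sketch using values copied from the dump, and assuming the FTL's 4 KiB block size, which is consistent with the capacities printed earlier), the trailing free region ends exactly at the NV cache capacity, and the L2P region size follows from the entry count and address size:

  # trailing free region from the nvc layout dump above: type 0xfffffffe, blk_offs 0x7220, blk_sz 0x13c0e0
  last_offs=0x7220
  last_sz=0x13c0e0
  total_blocks=$(( last_offs + last_sz ))               # 0x143300 = 1323776 blocks
  echo "$(( total_blocks * 4096 / 1024 / 1024 )) MiB"   # 5171 MiB, matching "NV cache device capacity: 5171.00 MiB"
  # L2P table: 20971520 entries x 4-byte addresses
  echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"          # 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"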
00:31:59.944 [2024-12-06 00:09:32.527132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:59.944 [2024-12-06 00:09:32.527140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:59.944 [2024-12-06 00:09:32.527153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:59.944 [2024-12-06 00:09:32.527158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:59.944 [2024-12-06 00:09:32.527165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:59.944 [2024-12-06 00:09:32.527170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.944 [2024-12-06 00:09:32.527178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:59.944 [2024-12-06 00:09:32.527183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:31:59.944 [2024-12-06 00:09:32.527191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.944 [2024-12-06 00:09:32.527219] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:59.944 [2024-12-06 00:09:32.527229] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:03.246 [2024-12-06 00:09:35.933241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.246 [2024-12-06 00:09:35.933321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:03.246 [2024-12-06 00:09:35.933337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3406.007 ms 00:32:03.246 [2024-12-06 00:09:35.933349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:35.963536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:35.963797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:03.507 [2024-12-06 00:09:35.963820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.957 ms 00:32:03.507 [2024-12-06 00:09:35.963832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:35.963993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:35.964008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:03.507 [2024-12-06 00:09:35.964019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:32:03.507 [2024-12-06 00:09:35.964035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:35.999269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:35.999319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:03.507 [2024-12-06 00:09:35.999331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.181 ms 00:32:03.507 [2024-12-06 00:09:35.999342] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:35.999375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:35.999391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:03.507 [2024-12-06 00:09:35.999400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:03.507 [2024-12-06 00:09:35.999418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.000006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.000035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:03.507 [2024-12-06 00:09:36.000046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:32:03.507 [2024-12-06 00:09:36.000056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.000198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.000213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:03.507 [2024-12-06 00:09:36.000225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:32:03.507 [2024-12-06 00:09:36.000239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.017375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.017427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:03.507 [2024-12-06 00:09:36.017439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.116 ms 00:32:03.507 [2024-12-06 00:09:36.017449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.052640] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:03.507 [2024-12-06 00:09:36.056484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.056539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:03.507 [2024-12-06 00:09:36.056555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.944 ms 00:32:03.507 [2024-12-06 00:09:36.056564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.155125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.155270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:03.507 [2024-12-06 00:09:36.155293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.514 ms 00:32:03.507 [2024-12-06 00:09:36.155301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.155519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.155541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:03.507 [2024-12-06 00:09:36.155554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:32:03.507 [2024-12-06 00:09:36.155562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.179349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.179383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:32:03.507 [2024-12-06 00:09:36.179397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.743 ms 00:32:03.507 [2024-12-06 00:09:36.179405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.202464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.202589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:03.507 [2024-12-06 00:09:36.202609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.020 ms 00:32:03.507 [2024-12-06 00:09:36.202617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.507 [2024-12-06 00:09:36.203242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.507 [2024-12-06 00:09:36.203261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:03.507 [2024-12-06 00:09:36.203272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:32:03.507 [2024-12-06 00:09:36.203281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.767 [2024-12-06 00:09:36.278357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.767 [2024-12-06 00:09:36.278400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:03.767 [2024-12-06 00:09:36.278416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.039 ms 00:32:03.768 [2024-12-06 00:09:36.278425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.768 [2024-12-06 00:09:36.303503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.768 [2024-12-06 00:09:36.303537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:03.768 [2024-12-06 00:09:36.303550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.995 ms 00:32:03.768 [2024-12-06 00:09:36.303558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.768 [2024-12-06 00:09:36.327613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.768 [2024-12-06 00:09:36.327646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:03.768 [2024-12-06 00:09:36.327659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.014 ms 00:32:03.768 [2024-12-06 00:09:36.327666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.768 [2024-12-06 00:09:36.351763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.768 [2024-12-06 00:09:36.351800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:03.768 [2024-12-06 00:09:36.351813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.054 ms 00:32:03.768 [2024-12-06 00:09:36.351821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.768 [2024-12-06 00:09:36.351864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.768 [2024-12-06 00:09:36.351873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:03.768 [2024-12-06 00:09:36.351886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:03.768 [2024-12-06 00:09:36.351894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.768 [2024-12-06 00:09:36.351991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.768 [2024-12-06 
00:09:36.352005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:03.768 [2024-12-06 00:09:36.352015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:03.768 [2024-12-06 00:09:36.352024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.768 [2024-12-06 00:09:36.353371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3836.032 ms, result 0 00:32:03.768 { 00:32:03.768 "name": "ftl0", 00:32:03.768 "uuid": "4fb7d2de-2db9-4360-a868-f6ce287ca9bb" 00:32:03.768 } 00:32:03.768 00:09:36 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:03.768 00:09:36 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:04.029 00:09:36 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:32:04.029 00:09:36 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:04.292 [2024-12-06 00:09:36.780539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.780747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.292 [2024-12-06 00:09:36.780770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.292 [2024-12-06 00:09:36.780782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.780814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.292 [2024-12-06 00:09:36.783873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.784043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.292 [2024-12-06 00:09:36.784068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:32:04.292 [2024-12-06 00:09:36.784078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.784403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.784420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:04.292 [2024-12-06 00:09:36.784432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:32:04.292 [2024-12-06 00:09:36.784441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.787678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.787801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:04.292 [2024-12-06 00:09:36.787819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:32:04.292 [2024-12-06 00:09:36.787827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.794021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.794169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:04.292 [2024-12-06 00:09:36.794196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:32:04.292 [2024-12-06 00:09:36.794204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.820658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.820706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:04.292 [2024-12-06 00:09:36.820721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.376 ms 00:32:04.292 [2024-12-06 00:09:36.820729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.838653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.838706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:04.292 [2024-12-06 00:09:36.838723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.866 ms 00:32:04.292 [2024-12-06 00:09:36.838731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.838904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.838917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:04.292 [2024-12-06 00:09:36.838929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:32:04.292 [2024-12-06 00:09:36.838937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.864590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.864764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:04.292 [2024-12-06 00:09:36.864789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.627 ms 00:32:04.292 [2024-12-06 00:09:36.864798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.889582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.889628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:04.292 [2024-12-06 00:09:36.889643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.738 ms 00:32:04.292 [2024-12-06 00:09:36.889650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.914203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.914248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:04.292 [2024-12-06 00:09:36.914262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.496 ms 00:32:04.292 [2024-12-06 00:09:36.914269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.938972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.292 [2024-12-06 00:09:36.939133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:04.292 [2024-12-06 00:09:36.939159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.597 ms 00:32:04.292 [2024-12-06 00:09:36.939166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.292 [2024-12-06 00:09:36.939206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.292 [2024-12-06 00:09:36.939221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:04.292 [2024-12-06 00:09:36.939237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:04.292 [2024-12-06 00:09:36.939245] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.292 [2024-12-06 00:09:36.939256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939477] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 
[2024-12-06 00:09:36.939703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:32:04.293 [2024-12-06 00:09:36.939934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.939991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.293 [2024-12-06 00:09:36.940092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.294 [2024-12-06 00:09:36.940188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.294 [2024-12-06 00:09:36.940199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb7d2de-2db9-4360-a868-f6ce287ca9bb 
00:32:04.294 [2024-12-06 00:09:36.940207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:04.294 [2024-12-06 00:09:36.940219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:04.294 [2024-12-06 00:09:36.940231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:04.294 [2024-12-06 00:09:36.940242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:04.294 [2024-12-06 00:09:36.940251] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.294 [2024-12-06 00:09:36.940261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.294 [2024-12-06 00:09:36.940269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:04.294 [2024-12-06 00:09:36.940277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.294 [2024-12-06 00:09:36.940283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.294 [2024-12-06 00:09:36.940293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.294 [2024-12-06 00:09:36.940300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.294 [2024-12-06 00:09:36.940311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:32:04.294 [2024-12-06 00:09:36.940321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.294 [2024-12-06 00:09:36.954112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.294 [2024-12-06 00:09:36.954282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.294 [2024-12-06 00:09:36.954305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.745 ms 00:32:04.294 [2024-12-06 00:09:36.954315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.294 [2024-12-06 00:09:36.954730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.294 [2024-12-06 00:09:36.954766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.294 [2024-12-06 00:09:36.954782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:32:04.294 [2024-12-06 00:09:36.954790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.001403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.001576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.555 [2024-12-06 00:09:37.001601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.001610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.001688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.001697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.555 [2024-12-06 00:09:37.001711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.001720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.001820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.001831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.555 [2024-12-06 00:09:37.001842] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.001850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.001873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.001882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.555 [2024-12-06 00:09:37.001892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.001902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.085609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.085661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.555 [2024-12-06 00:09:37.085677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.085686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.154758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.154813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.555 [2024-12-06 00:09:37.154828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.154839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.154924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.154935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.555 [2024-12-06 00:09:37.154946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.154955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.155058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.155069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.555 [2024-12-06 00:09:37.155081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.155089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.155202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.155214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.555 [2024-12-06 00:09:37.155225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.155233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.155270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.155280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.555 [2024-12-06 00:09:37.155290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.155298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.155347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.155358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:32:04.555 [2024-12-06 00:09:37.155368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.155376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.155430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.555 [2024-12-06 00:09:37.155441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.555 [2024-12-06 00:09:37.155451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.555 [2024-12-06 00:09:37.155460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.555 [2024-12-06 00:09:37.155609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.034 ms, result 0 00:32:04.555 true 00:32:04.555 00:09:37 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84637 00:32:04.555 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84637 ']' 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84637 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84637 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84637' 00:32:04.556 killing process with pid 84637 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84637 00:32:04.556 00:09:37 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84637 00:32:11.147 00:09:43 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:15.350 262144+0 records in 00:32:15.350 262144+0 records out 00:32:15.350 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.15602 s, 258 MB/s 00:32:15.350 00:09:47 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:16.732 00:09:49 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:16.732 [2024-12-06 00:09:49.264952] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
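The dd transfer above is internally consistent: 262144 records of 4 KiB each is exactly 1 GiB, and 1,073,741,824 bytes over 4.15602 s works out to roughly 258 MB/s in the decimal megabytes dd reports. A minimal sketch of the same arithmetic, using the figures from that output:

  bytes=$(( 262144 * 4096 ))        # 256K records x 4 KiB = 1073741824 bytes (1.0 GiB)
  awk -v b="$bytes" 'BEGIN { printf "%.1f MB/s\n", b / 4.15602 / 1e6 }'   # ~258.4 MB/s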
00:32:16.732 [2024-12-06 00:09:49.265057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84862 ] 00:32:16.732 [2024-12-06 00:09:49.410799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.992 [2024-12-06 00:09:49.486317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.992 [2024-12-06 00:09:49.696157] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:16.992 [2024-12-06 00:09:49.696221] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:17.254 [2024-12-06 00:09:49.847255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.847295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:17.254 [2024-12-06 00:09:49.847305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:17.254 [2024-12-06 00:09:49.847312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.847346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.847356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.254 [2024-12-06 00:09:49.847362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:17.254 [2024-12-06 00:09:49.847368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.847380] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:17.254 [2024-12-06 00:09:49.847931] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:17.254 [2024-12-06 00:09:49.847942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.847948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.254 [2024-12-06 00:09:49.847955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:32:17.254 [2024-12-06 00:09:49.847960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.848940] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:17.254 [2024-12-06 00:09:49.858740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.858770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:17.254 [2024-12-06 00:09:49.858779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.801 ms 00:32:17.254 [2024-12-06 00:09:49.858785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.858830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.858838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:17.254 [2024-12-06 00:09:49.858844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:17.254 [2024-12-06 00:09:49.858850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.863360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
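This second startup goes through "Load super block" / "Validate super block" rather than the scrub-and-initialize path of the first one: spdk_dd is pointed at the ftl.json config produced earlier, and the device had been persisted and shut down cleanly ("Persist superblock", "Set FTL clean state"). To compare the per-step timings of the two startups, the trace_step notices can be paired up; a rough sketch, assuming the console output has been saved with one notice per line to a hypothetical build.log:

  # pair each management-step name with its duration (ms) from the trace_step notices
  grep -E 'trace_step.*(name:|duration:)' build.log \
    | sed -E 's/.*(name|duration): //' \
    | paste - -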
00:32:17.254 [2024-12-06 00:09:49.863488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.254 [2024-12-06 00:09:49.863500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.463 ms 00:32:17.254 [2024-12-06 00:09:49.863510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.863565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.863572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.254 [2024-12-06 00:09:49.863578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:17.254 [2024-12-06 00:09:49.863584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.863617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.863625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:17.254 [2024-12-06 00:09:49.863631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:17.254 [2024-12-06 00:09:49.863637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.863652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:17.254 [2024-12-06 00:09:49.866403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.866427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.254 [2024-12-06 00:09:49.866436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:32:17.254 [2024-12-06 00:09:49.866442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.866470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.866477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:17.254 [2024-12-06 00:09:49.866484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:17.254 [2024-12-06 00:09:49.866489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.866502] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:17.254 [2024-12-06 00:09:49.866517] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:17.254 [2024-12-06 00:09:49.866543] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:17.254 [2024-12-06 00:09:49.866557] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:17.254 [2024-12-06 00:09:49.866636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:17.254 [2024-12-06 00:09:49.866643] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:17.254 [2024-12-06 00:09:49.866651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:17.254 [2024-12-06 00:09:49.866659] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:17.254 [2024-12-06 00:09:49.866666] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:17.254 [2024-12-06 00:09:49.866672] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:17.254 [2024-12-06 00:09:49.866677] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:17.254 [2024-12-06 00:09:49.866684] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:17.254 [2024-12-06 00:09:49.866690] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:17.254 [2024-12-06 00:09:49.866696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.866701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:17.254 [2024-12-06 00:09:49.866707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:32:17.254 [2024-12-06 00:09:49.866713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.866775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.254 [2024-12-06 00:09:49.866781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:17.254 [2024-12-06 00:09:49.866786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:17.254 [2024-12-06 00:09:49.866792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.254 [2024-12-06 00:09:49.866867] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:17.255 [2024-12-06 00:09:49.866875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:17.255 [2024-12-06 00:09:49.866881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:17.255 [2024-12-06 00:09:49.866887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.866893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:17.255 [2024-12-06 00:09:49.866898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.866903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:17.255 [2024-12-06 00:09:49.866909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:17.255 [2024-12-06 00:09:49.866914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:17.255 [2024-12-06 00:09:49.866919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:17.255 [2024-12-06 00:09:49.866925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:17.255 [2024-12-06 00:09:49.866930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:17.255 [2024-12-06 00:09:49.866935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:17.255 [2024-12-06 00:09:49.866943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:17.255 [2024-12-06 00:09:49.866949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:17.255 [2024-12-06 00:09:49.866955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.866960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:17.255 [2024-12-06 00:09:49.867086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867110] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:17.255 [2024-12-06 00:09:49.867140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:17.255 [2024-12-06 00:09:49.867216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:17.255 [2024-12-06 00:09:49.867261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:17.255 [2024-12-06 00:09:49.867302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:17.255 [2024-12-06 00:09:49.867366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:17.255 [2024-12-06 00:09:49.867394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:17.255 [2024-12-06 00:09:49.867407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:17.255 [2024-12-06 00:09:49.867421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:17.255 [2024-12-06 00:09:49.867453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:17.255 [2024-12-06 00:09:49.867470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:17.255 [2024-12-06 00:09:49.867548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:17.255 [2024-12-06 00:09:49.867578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:17.255 [2024-12-06 00:09:49.867592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867606] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:17.255 [2024-12-06 00:09:49.867620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:17.255 [2024-12-06 00:09:49.867635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.255 [2024-12-06 00:09:49.867691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:17.255 [2024-12-06 00:09:49.867707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:17.255 [2024-12-06 00:09:49.867722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:17.255 
[2024-12-06 00:09:49.867736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:17.255 [2024-12-06 00:09:49.867750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:17.255 [2024-12-06 00:09:49.867764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:17.255 [2024-12-06 00:09:49.867779] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:17.255 [2024-12-06 00:09:49.867803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:17.255 [2024-12-06 00:09:49.867880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:17.255 [2024-12-06 00:09:49.867889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:17.255 [2024-12-06 00:09:49.867894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:17.255 [2024-12-06 00:09:49.867900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:17.255 [2024-12-06 00:09:49.867906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:17.255 [2024-12-06 00:09:49.867916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:17.255 [2024-12-06 00:09:49.867922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:17.255 [2024-12-06 00:09:49.867928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:17.255 [2024-12-06 00:09:49.867933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:17.255 [2024-12-06 00:09:49.867975] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:17.255 [2024-12-06 00:09:49.867982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:17.255 [2024-12-06 00:09:49.867998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:17.255 [2024-12-06 00:09:49.868004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:17.255 [2024-12-06 00:09:49.868009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:17.255 [2024-12-06 00:09:49.868015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.255 [2024-12-06 00:09:49.868021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:17.255 [2024-12-06 00:09:49.868027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:32:17.255 [2024-12-06 00:09:49.868033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.255 [2024-12-06 00:09:49.889073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.255 [2024-12-06 00:09:49.889101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:17.255 [2024-12-06 00:09:49.889109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.989 ms 00:32:17.255 [2024-12-06 00:09:49.889117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.255 [2024-12-06 00:09:49.889180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.255 [2024-12-06 00:09:49.889186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:17.255 [2024-12-06 00:09:49.889193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:17.255 [2024-12-06 00:09:49.889198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.255 [2024-12-06 00:09:49.925457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.255 [2024-12-06 00:09:49.925489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:17.255 [2024-12-06 00:09:49.925499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.218 ms 00:32:17.255 [2024-12-06 00:09:49.925505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.255 [2024-12-06 00:09:49.925536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.255 [2024-12-06 00:09:49.925543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:17.255 [2024-12-06 00:09:49.925553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:17.255 [2024-12-06 00:09:49.925559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.255 [2024-12-06 00:09:49.925872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.255 [2024-12-06 00:09:49.925885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:17.255 [2024-12-06 00:09:49.925893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:32:17.255 [2024-12-06 00:09:49.925898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.255 [2024-12-06 00:09:49.926027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.256 [2024-12-06 00:09:49.926035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:17.256 [2024-12-06 00:09:49.926041] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:17.256 [2024-12-06 00:09:49.926051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.256 [2024-12-06 00:09:49.936536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.256 [2024-12-06 00:09:49.936563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:17.256 [2024-12-06 00:09:49.936573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.469 ms 00:32:17.256 [2024-12-06 00:09:49.936579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.256 [2024-12-06 00:09:49.946328] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:17.256 [2024-12-06 00:09:49.946357] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:17.256 [2024-12-06 00:09:49.946367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.256 [2024-12-06 00:09:49.946374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:17.256 [2024-12-06 00:09:49.946381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.718 ms 00:32:17.256 [2024-12-06 00:09:49.946386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:49.964579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:49.964610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:17.517 [2024-12-06 00:09:49.964618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.162 ms 00:32:17.517 [2024-12-06 00:09:49.964625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:49.973546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:49.973573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:17.517 [2024-12-06 00:09:49.973581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.892 ms 00:32:17.517 [2024-12-06 00:09:49.973586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:49.982423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:49.982448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:17.517 [2024-12-06 00:09:49.982456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.810 ms 00:32:17.517 [2024-12-06 00:09:49.982462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:49.982912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:49.982926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:17.517 [2024-12-06 00:09:49.982934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:32:17.517 [2024-12-06 00:09:49.982941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.028122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.028348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:17.517 [2024-12-06 00:09:50.028370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
45.165 ms 00:32:17.517 [2024-12-06 00:09:50.028383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.036583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:17.517 [2024-12-06 00:09:50.038759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.038785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:17.517 [2024-12-06 00:09:50.038795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.336 ms 00:32:17.517 [2024-12-06 00:09:50.038802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.038875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.038884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:17.517 [2024-12-06 00:09:50.038892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:17.517 [2024-12-06 00:09:50.038898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.038950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.038958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:17.517 [2024-12-06 00:09:50.038981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:17.517 [2024-12-06 00:09:50.038988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.039003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.039010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:17.517 [2024-12-06 00:09:50.039017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:17.517 [2024-12-06 00:09:50.039023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.039049] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:17.517 [2024-12-06 00:09:50.039058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.039065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:17.517 [2024-12-06 00:09:50.039071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:17.517 [2024-12-06 00:09:50.039077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.057073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.057105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:17.517 [2024-12-06 00:09:50.057114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:32:17.517 [2024-12-06 00:09:50.057124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 [2024-12-06 00:09:50.057181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.517 [2024-12-06 00:09:50.057188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:17.517 [2024-12-06 00:09:50.057195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:17.517 [2024-12-06 00:09:50.057201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.517 
[2024-12-06 00:09:50.057980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 210.376 ms, result 0 00:32:18.463  [2024-12-06T00:09:52.113Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-06T00:09:53.501Z] Copying: 41/1024 [MB] (16 MBps) [2024-12-06T00:09:54.074Z] Copying: 57/1024 [MB] (15 MBps) [2024-12-06T00:09:55.458Z] Copying: 73/1024 [MB] (16 MBps) [2024-12-06T00:09:56.398Z] Copying: 90/1024 [MB] (16 MBps) [2024-12-06T00:09:57.337Z] Copying: 125/1024 [MB] (35 MBps) [2024-12-06T00:09:58.278Z] Copying: 161/1024 [MB] (36 MBps) [2024-12-06T00:09:59.220Z] Copying: 180/1024 [MB] (18 MBps) [2024-12-06T00:10:00.165Z] Copying: 194/1024 [MB] (14 MBps) [2024-12-06T00:10:01.107Z] Copying: 209/1024 [MB] (14 MBps) [2024-12-06T00:10:02.492Z] Copying: 227/1024 [MB] (18 MBps) [2024-12-06T00:10:03.435Z] Copying: 238/1024 [MB] (10 MBps) [2024-12-06T00:10:04.421Z] Copying: 255/1024 [MB] (16 MBps) [2024-12-06T00:10:05.380Z] Copying: 267/1024 [MB] (12 MBps) [2024-12-06T00:10:06.319Z] Copying: 277/1024 [MB] (10 MBps) [2024-12-06T00:10:07.260Z] Copying: 296/1024 [MB] (19 MBps) [2024-12-06T00:10:08.205Z] Copying: 307/1024 [MB] (10 MBps) [2024-12-06T00:10:09.150Z] Copying: 321/1024 [MB] (13 MBps) [2024-12-06T00:10:10.094Z] Copying: 336/1024 [MB] (15 MBps) [2024-12-06T00:10:11.482Z] Copying: 373/1024 [MB] (36 MBps) [2024-12-06T00:10:12.424Z] Copying: 410/1024 [MB] (36 MBps) [2024-12-06T00:10:13.364Z] Copying: 439/1024 [MB] (29 MBps) [2024-12-06T00:10:14.306Z] Copying: 454/1024 [MB] (14 MBps) [2024-12-06T00:10:15.248Z] Copying: 468/1024 [MB] (14 MBps) [2024-12-06T00:10:16.187Z] Copying: 483/1024 [MB] (15 MBps) [2024-12-06T00:10:17.128Z] Copying: 494/1024 [MB] (10 MBps) [2024-12-06T00:10:18.509Z] Copying: 520/1024 [MB] (25 MBps) [2024-12-06T00:10:19.079Z] Copying: 556/1024 [MB] (36 MBps) [2024-12-06T00:10:20.464Z] Copying: 567/1024 [MB] (11 MBps) [2024-12-06T00:10:21.407Z] Copying: 577/1024 [MB] (10 MBps) [2024-12-06T00:10:22.354Z] Copying: 592/1024 [MB] (14 MBps) [2024-12-06T00:10:23.300Z] Copying: 609/1024 [MB] (16 MBps) [2024-12-06T00:10:24.247Z] Copying: 625/1024 [MB] (16 MBps) [2024-12-06T00:10:25.192Z] Copying: 637/1024 [MB] (11 MBps) [2024-12-06T00:10:26.138Z] Copying: 648/1024 [MB] (10 MBps) [2024-12-06T00:10:27.083Z] Copying: 659/1024 [MB] (11 MBps) [2024-12-06T00:10:28.470Z] Copying: 670/1024 [MB] (10 MBps) [2024-12-06T00:10:29.412Z] Copying: 680/1024 [MB] (10 MBps) [2024-12-06T00:10:30.356Z] Copying: 694/1024 [MB] (13 MBps) [2024-12-06T00:10:31.302Z] Copying: 718/1024 [MB] (24 MBps) [2024-12-06T00:10:32.248Z] Copying: 737/1024 [MB] (19 MBps) [2024-12-06T00:10:33.192Z] Copying: 759/1024 [MB] (21 MBps) [2024-12-06T00:10:34.136Z] Copying: 769/1024 [MB] (10 MBps) [2024-12-06T00:10:35.082Z] Copying: 780/1024 [MB] (10 MBps) [2024-12-06T00:10:36.470Z] Copying: 792/1024 [MB] (11 MBps) [2024-12-06T00:10:37.104Z] Copying: 832/1024 [MB] (40 MBps) [2024-12-06T00:10:38.492Z] Copying: 861/1024 [MB] (29 MBps) [2024-12-06T00:10:39.436Z] Copying: 876/1024 [MB] (14 MBps) [2024-12-06T00:10:40.379Z] Copying: 890/1024 [MB] (14 MBps) [2024-12-06T00:10:41.321Z] Copying: 910/1024 [MB] (19 MBps) [2024-12-06T00:10:42.260Z] Copying: 928/1024 [MB] (18 MBps) [2024-12-06T00:10:43.217Z] Copying: 941/1024 [MB] (13 MBps) [2024-12-06T00:10:44.158Z] Copying: 955/1024 [MB] (14 MBps) [2024-12-06T00:10:45.101Z] Copying: 967/1024 [MB] (11 MBps) [2024-12-06T00:10:46.485Z] Copying: 984/1024 [MB] (16 MBps) [2024-12-06T00:10:47.427Z] Copying: 994/1024 [MB] (10 MBps) 
[2024-12-06T00:10:48.374Z] Copying: 1009/1024 [MB] (15 MBps) [2024-12-06T00:10:48.374Z] Copying: 1020/1024 [MB] (10 MBps) [2024-12-06T00:10:48.374Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-06 00:10:48.329369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.665 [2024-12-06 00:10:48.329410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:15.665 [2024-12-06 00:10:48.329421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:15.665 [2024-12-06 00:10:48.329428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.665 [2024-12-06 00:10:48.329444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:15.665 [2024-12-06 00:10:48.331626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.665 [2024-12-06 00:10:48.331655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:15.665 [2024-12-06 00:10:48.331669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:33:15.665 [2024-12-06 00:10:48.331675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.665 [2024-12-06 00:10:48.333139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.665 [2024-12-06 00:10:48.333168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:15.665 [2024-12-06 00:10:48.333176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:33:15.665 [2024-12-06 00:10:48.333182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.665 [2024-12-06 00:10:48.333202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.665 [2024-12-06 00:10:48.333209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:15.665 [2024-12-06 00:10:48.333215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:15.665 [2024-12-06 00:10:48.333221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.665 [2024-12-06 00:10:48.333261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.665 [2024-12-06 00:10:48.333268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:15.665 [2024-12-06 00:10:48.333274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:15.665 [2024-12-06 00:10:48.333280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.665 [2024-12-06 00:10:48.333290] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:15.665 [2024-12-06 00:10:48.333301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:33:15.665 [2024-12-06 00:10:48.333337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:15.665 [2024-12-06 00:10:48.333467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333774] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:15.666 [2024-12-06 00:10:48.333895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:15.666 [2024-12-06 00:10:48.333901] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb7d2de-2db9-4360-a868-f6ce287ca9bb 00:33:15.666 [2024-12-06 00:10:48.333907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:15.666 [2024-12-06 00:10:48.333912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:15.666 [2024-12-06 00:10:48.333918] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:15.666 [2024-12-06 00:10:48.333928] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:15.666 [2024-12-06 
00:10:48.333933] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:15.666 [2024-12-06 00:10:48.333939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:15.666 [2024-12-06 00:10:48.333945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:15.666 [2024-12-06 00:10:48.333950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:15.666 [2024-12-06 00:10:48.333956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:15.666 [2024-12-06 00:10:48.333961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.666 [2024-12-06 00:10:48.333977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:15.666 [2024-12-06 00:10:48.333984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:33:15.666 [2024-12-06 00:10:48.333989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.666 [2024-12-06 00:10:48.343925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.666 [2024-12-06 00:10:48.343957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:15.666 [2024-12-06 00:10:48.343977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.925 ms 00:33:15.666 [2024-12-06 00:10:48.343984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.666 [2024-12-06 00:10:48.344273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:15.666 [2024-12-06 00:10:48.344281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:15.666 [2024-12-06 00:10:48.344288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:33:15.667 [2024-12-06 00:10:48.344293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.370032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.370058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:15.928 [2024-12-06 00:10:48.370067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.370072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.370114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.370120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:15.928 [2024-12-06 00:10:48.370126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.370132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.370178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.370188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:15.928 [2024-12-06 00:10:48.370194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.370200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.370211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.370217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:15.928 [2024-12-06 00:10:48.370225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.370231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.428568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.428600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:15.928 [2024-12-06 00:10:48.428608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.428614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:15.928 [2024-12-06 00:10:48.476648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.476655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:15.928 [2024-12-06 00:10:48.476707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.476714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:15.928 [2024-12-06 00:10:48.476765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.476771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:15.928 [2024-12-06 00:10:48.476843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.476850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:15.928 [2024-12-06 00:10:48.476880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.476886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:15.928 [2024-12-06 00:10:48.476923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.476930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.476961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:15.928 [2024-12-06 00:10:48.476997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:15.928 
[2024-12-06 00:10:48.477003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:15.928 [2024-12-06 00:10:48.477009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:15.928 [2024-12-06 00:10:48.477096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 147.705 ms, result 0 00:33:16.534 00:33:16.534 00:33:16.534 00:10:49 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:16.534 [2024-12-06 00:10:49.101909] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:33:16.534 [2024-12-06 00:10:49.102126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85453 ] 00:33:16.795 [2024-12-06 00:10:49.250040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.795 [2024-12-06 00:10:49.325704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.058 [2024-12-06 00:10:49.534896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:17.058 [2024-12-06 00:10:49.534947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:17.058 [2024-12-06 00:10:49.685875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.685914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:17.058 [2024-12-06 00:10:49.685924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:17.058 [2024-12-06 00:10:49.685930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.685963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.685986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:17.058 [2024-12-06 00:10:49.685993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:33:17.058 [2024-12-06 00:10:49.685999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.686012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:17.058 [2024-12-06 00:10:49.686512] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:17.058 [2024-12-06 00:10:49.686524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.686530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:17.058 [2024-12-06 00:10:49.686536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:33:17.058 [2024-12-06 00:10:49.686542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.686742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:17.058 [2024-12-06 00:10:49.686759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.686767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load 
super block 00:33:17.058 [2024-12-06 00:10:49.686774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:17.058 [2024-12-06 00:10:49.686779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.686811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.686817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:17.058 [2024-12-06 00:10:49.686823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:17.058 [2024-12-06 00:10:49.686829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.687029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.687037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:17.058 [2024-12-06 00:10:49.687043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:33:17.058 [2024-12-06 00:10:49.687049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.687097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.687103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:17.058 [2024-12-06 00:10:49.687109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:17.058 [2024-12-06 00:10:49.687114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.687129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.687135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:17.058 [2024-12-06 00:10:49.687143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:17.058 [2024-12-06 00:10:49.687149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.687161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:17.058 [2024-12-06 00:10:49.689997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.690021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:17.058 [2024-12-06 00:10:49.690028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:33:17.058 [2024-12-06 00:10:49.690034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.690059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.690065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:17.058 [2024-12-06 00:10:49.690071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:17.058 [2024-12-06 00:10:49.690076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.690105] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:17.058 [2024-12-06 00:10:49.690121] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:17.058 [2024-12-06 00:10:49.690148] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:17.058 [2024-12-06 
00:10:49.690160] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:17.058 [2024-12-06 00:10:49.690238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:17.058 [2024-12-06 00:10:49.690246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:17.058 [2024-12-06 00:10:49.690254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:17.058 [2024-12-06 00:10:49.690262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690269] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690277] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:17.058 [2024-12-06 00:10:49.690282] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:17.058 [2024-12-06 00:10:49.690288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:17.058 [2024-12-06 00:10:49.690293] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:17.058 [2024-12-06 00:10:49.690298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.690304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:17.058 [2024-12-06 00:10:49.690310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:33:17.058 [2024-12-06 00:10:49.690315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.690377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.058 [2024-12-06 00:10:49.690383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:17.058 [2024-12-06 00:10:49.690388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:17.058 [2024-12-06 00:10:49.690395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.058 [2024-12-06 00:10:49.690469] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:17.058 [2024-12-06 00:10:49.690476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:17.058 [2024-12-06 00:10:49.690482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:17.058 [2024-12-06 00:10:49.690499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:17.058 [2024-12-06 00:10:49.690515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:17.058 [2024-12-06 00:10:49.690525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:17.058 [2024-12-06 00:10:49.690530] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:17.058 [2024-12-06 00:10:49.690535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:17.058 [2024-12-06 00:10:49.690540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:17.058 [2024-12-06 00:10:49.690546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:17.058 [2024-12-06 00:10:49.690554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:17.058 [2024-12-06 00:10:49.690564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:17.058 [2024-12-06 00:10:49.690579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:17.058 [2024-12-06 00:10:49.690594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:17.058 [2024-12-06 00:10:49.690608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:17.058 [2024-12-06 00:10:49.690622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:17.058 [2024-12-06 00:10:49.690627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.058 [2024-12-06 00:10:49.690632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:17.059 [2024-12-06 00:10:49.690637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:17.059 [2024-12-06 00:10:49.690641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:17.059 [2024-12-06 00:10:49.690647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:17.059 [2024-12-06 00:10:49.690651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:17.059 [2024-12-06 00:10:49.690656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:17.059 [2024-12-06 00:10:49.690661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:17.059 [2024-12-06 00:10:49.690666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:17.059 [2024-12-06 00:10:49.690670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.059 [2024-12-06 00:10:49.690675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:17.059 [2024-12-06 00:10:49.690680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:17.059 [2024-12-06 00:10:49.690685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.059 [2024-12-06 
00:10:49.690690] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:17.059 [2024-12-06 00:10:49.690696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:17.059 [2024-12-06 00:10:49.690701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:17.059 [2024-12-06 00:10:49.690707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.059 [2024-12-06 00:10:49.690714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:17.059 [2024-12-06 00:10:49.690719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:17.059 [2024-12-06 00:10:49.690724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:17.059 [2024-12-06 00:10:49.690729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:17.059 [2024-12-06 00:10:49.690733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:17.059 [2024-12-06 00:10:49.690738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:17.059 [2024-12-06 00:10:49.690744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:17.059 [2024-12-06 00:10:49.690751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:17.059 [2024-12-06 00:10:49.690763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:17.059 [2024-12-06 00:10:49.690768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:17.059 [2024-12-06 00:10:49.690773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:17.059 [2024-12-06 00:10:49.690778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:17.059 [2024-12-06 00:10:49.690784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:17.059 [2024-12-06 00:10:49.690789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:17.059 [2024-12-06 00:10:49.690794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:17.059 [2024-12-06 00:10:49.690799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:17.059 [2024-12-06 00:10:49.690805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690820] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:17.059 [2024-12-06 00:10:49.690831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:17.059 [2024-12-06 00:10:49.690837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:17.059 [2024-12-06 00:10:49.690849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:17.059 [2024-12-06 00:10:49.690854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:17.059 [2024-12-06 00:10:49.690860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:17.059 [2024-12-06 00:10:49.690865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.690870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:17.059 [2024-12-06 00:10:49.690876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:33:17.059 [2024-12-06 00:10:49.690882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.709206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.709232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:17.059 [2024-12-06 00:10:49.709239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.294 ms 00:33:17.059 [2024-12-06 00:10:49.709244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.709305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.709311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:17.059 [2024-12-06 00:10:49.709319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:17.059 [2024-12-06 00:10:49.709324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.748074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.748173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:17.059 [2024-12-06 00:10:49.748187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.713 ms 00:33:17.059 [2024-12-06 00:10:49.748193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.748227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.748241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:17.059 [2024-12-06 00:10:49.748248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:17.059 [2024-12-06 00:10:49.748254] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.748326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.748334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:17.059 [2024-12-06 00:10:49.748340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:17.059 [2024-12-06 00:10:49.748346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.748433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.748441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:17.059 [2024-12-06 00:10:49.748447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:33:17.059 [2024-12-06 00:10:49.748453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.758821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.758848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:17.059 [2024-12-06 00:10:49.758856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.354 ms 00:33:17.059 [2024-12-06 00:10:49.758862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.059 [2024-12-06 00:10:49.758945] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:17.059 [2024-12-06 00:10:49.758955] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:17.059 [2024-12-06 00:10:49.758962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.059 [2024-12-06 00:10:49.758991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:17.059 [2024-12-06 00:10:49.758997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:17.059 [2024-12-06 00:10:49.759003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.321 [2024-12-06 00:10:49.768167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.321 [2024-12-06 00:10:49.768190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:17.321 [2024-12-06 00:10:49.768198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.153 ms 00:33:17.321 [2024-12-06 00:10:49.768204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.321 [2024-12-06 00:10:49.768297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.768304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:17.322 [2024-12-06 00:10:49.768310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:17.322 [2024-12-06 00:10:49.768318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.768342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.768348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:17.322 [2024-12-06 00:10:49.768359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:17.322 [2024-12-06 00:10:49.768364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:17.322 [2024-12-06 00:10:49.768791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.768803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:17.322 [2024-12-06 00:10:49.768809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:33:17.322 [2024-12-06 00:10:49.768815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.768828] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:17.322 [2024-12-06 00:10:49.768835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.768841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:17.322 [2024-12-06 00:10:49.768846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:17.322 [2024-12-06 00:10:49.768852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.777337] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:17.322 [2024-12-06 00:10:49.777438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.777446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:17.322 [2024-12-06 00:10:49.777452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.573 ms 00:33:17.322 [2024-12-06 00:10:49.777458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.779045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.779066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:17.322 [2024-12-06 00:10:49.779072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:33:17.322 [2024-12-06 00:10:49.779078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.779136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.779144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:17.322 [2024-12-06 00:10:49.779150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:17.322 [2024-12-06 00:10:49.779156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.779183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.779192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:17.322 [2024-12-06 00:10:49.779199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:17.322 [2024-12-06 00:10:49.779204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.779224] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:17.322 [2024-12-06 00:10:49.779231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.779236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:17.322 [2024-12-06 00:10:49.779242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:17.322 [2024-12-06 00:10:49.779247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.797475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.797502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:17.322 [2024-12-06 00:10:49.797510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.212 ms 00:33:17.322 [2024-12-06 00:10:49.797516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.797568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.322 [2024-12-06 00:10:49.797575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:17.322 [2024-12-06 00:10:49.797581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:17.322 [2024-12-06 00:10:49.797587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.322 [2024-12-06 00:10:49.798281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.068 ms, result 0 00:33:18.266  [2024-12-06T00:10:52.365Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-06T00:10:52.940Z] Copying: 34/1024 [MB] (13 MBps) [2024-12-06T00:10:54.328Z] Copying: 52/1024 [MB] (18 MBps) [2024-12-06T00:10:55.270Z] Copying: 65/1024 [MB] (12 MBps) [2024-12-06T00:10:56.210Z] Copying: 77/1024 [MB] (12 MBps) [2024-12-06T00:10:57.152Z] Copying: 94/1024 [MB] (16 MBps) [2024-12-06T00:10:58.094Z] Copying: 114/1024 [MB] (19 MBps) [2024-12-06T00:10:59.038Z] Copying: 135/1024 [MB] (21 MBps) [2024-12-06T00:10:59.981Z] Copying: 145/1024 [MB] (10 MBps) [2024-12-06T00:11:01.368Z] Copying: 158/1024 [MB] (12 MBps) [2024-12-06T00:11:01.941Z] Copying: 168/1024 [MB] (10 MBps) [2024-12-06T00:11:03.329Z] Copying: 178/1024 [MB] (10 MBps) [2024-12-06T00:11:04.273Z] Copying: 192/1024 [MB] (13 MBps) [2024-12-06T00:11:05.217Z] Copying: 205/1024 [MB] (12 MBps) [2024-12-06T00:11:06.156Z] Copying: 216/1024 [MB] (10 MBps) [2024-12-06T00:11:07.097Z] Copying: 227/1024 [MB] (10 MBps) [2024-12-06T00:11:08.108Z] Copying: 237/1024 [MB] (10 MBps) [2024-12-06T00:11:09.075Z] Copying: 247/1024 [MB] (10 MBps) [2024-12-06T00:11:10.019Z] Copying: 258/1024 [MB] (10 MBps) [2024-12-06T00:11:10.962Z] Copying: 268/1024 [MB] (10 MBps) [2024-12-06T00:11:11.954Z] Copying: 280/1024 [MB] (11 MBps) [2024-12-06T00:11:13.342Z] Copying: 291/1024 [MB] (11 MBps) [2024-12-06T00:11:14.285Z] Copying: 302/1024 [MB] (10 MBps) [2024-12-06T00:11:15.241Z] Copying: 312/1024 [MB] (10 MBps) [2024-12-06T00:11:16.202Z] Copying: 322/1024 [MB] (10 MBps) [2024-12-06T00:11:17.144Z] Copying: 333/1024 [MB] (10 MBps) [2024-12-06T00:11:18.089Z] Copying: 343/1024 [MB] (10 MBps) [2024-12-06T00:11:19.034Z] Copying: 359/1024 [MB] (15 MBps) [2024-12-06T00:11:19.978Z] Copying: 370/1024 [MB] (10 MBps) [2024-12-06T00:11:21.375Z] Copying: 383/1024 [MB] (13 MBps) [2024-12-06T00:11:21.948Z] Copying: 400/1024 [MB] (17 MBps) [2024-12-06T00:11:23.335Z] Copying: 416/1024 [MB] (15 MBps) [2024-12-06T00:11:24.280Z] Copying: 435/1024 [MB] (18 MBps) [2024-12-06T00:11:25.221Z] Copying: 449/1024 [MB] (14 MBps) [2024-12-06T00:11:26.164Z] Copying: 467/1024 [MB] (18 MBps) [2024-12-06T00:11:27.111Z] Copying: 485/1024 [MB] (17 MBps) [2024-12-06T00:11:28.056Z] Copying: 505/1024 [MB] (20 MBps) [2024-12-06T00:11:29.002Z] Copying: 527/1024 [MB] (21 MBps) [2024-12-06T00:11:29.947Z] Copying: 545/1024 [MB] (17 MBps) [2024-12-06T00:11:31.336Z] Copying: 564/1024 [MB] (19 MBps) [2024-12-06T00:11:32.281Z] Copying: 
583/1024 [MB] (18 MBps) [2024-12-06T00:11:33.227Z] Copying: 593/1024 [MB] (10 MBps) [2024-12-06T00:11:34.173Z] Copying: 604/1024 [MB] (10 MBps) [2024-12-06T00:11:35.119Z] Copying: 616/1024 [MB] (11 MBps) [2024-12-06T00:11:36.059Z] Copying: 632/1024 [MB] (16 MBps) [2024-12-06T00:11:37.003Z] Copying: 647/1024 [MB] (14 MBps) [2024-12-06T00:11:37.949Z] Copying: 658/1024 [MB] (11 MBps) [2024-12-06T00:11:39.338Z] Copying: 668/1024 [MB] (10 MBps) [2024-12-06T00:11:40.311Z] Copying: 679/1024 [MB] (10 MBps) [2024-12-06T00:11:40.957Z] Copying: 699/1024 [MB] (19 MBps) [2024-12-06T00:11:42.344Z] Copying: 709/1024 [MB] (10 MBps) [2024-12-06T00:11:43.285Z] Copying: 720/1024 [MB] (10 MBps) [2024-12-06T00:11:44.229Z] Copying: 735/1024 [MB] (15 MBps) [2024-12-06T00:11:45.175Z] Copying: 748/1024 [MB] (12 MBps) [2024-12-06T00:11:46.118Z] Copying: 758/1024 [MB] (10 MBps) [2024-12-06T00:11:47.063Z] Copying: 778/1024 [MB] (19 MBps) [2024-12-06T00:11:48.008Z] Copying: 789/1024 [MB] (10 MBps) [2024-12-06T00:11:48.953Z] Copying: 799/1024 [MB] (10 MBps) [2024-12-06T00:11:50.341Z] Copying: 811/1024 [MB] (11 MBps) [2024-12-06T00:11:51.285Z] Copying: 831/1024 [MB] (20 MBps) [2024-12-06T00:11:52.231Z] Copying: 851/1024 [MB] (20 MBps) [2024-12-06T00:11:53.192Z] Copying: 863/1024 [MB] (11 MBps) [2024-12-06T00:11:54.136Z] Copying: 875/1024 [MB] (12 MBps) [2024-12-06T00:11:55.080Z] Copying: 886/1024 [MB] (10 MBps) [2024-12-06T00:11:56.023Z] Copying: 897/1024 [MB] (10 MBps) [2024-12-06T00:11:56.965Z] Copying: 909/1024 [MB] (12 MBps) [2024-12-06T00:11:58.352Z] Copying: 920/1024 [MB] (11 MBps) [2024-12-06T00:11:59.297Z] Copying: 934/1024 [MB] (14 MBps) [2024-12-06T00:12:00.242Z] Copying: 950/1024 [MB] (15 MBps) [2024-12-06T00:12:01.184Z] Copying: 960/1024 [MB] (10 MBps) [2024-12-06T00:12:02.128Z] Copying: 980/1024 [MB] (19 MBps) [2024-12-06T00:12:03.073Z] Copying: 997/1024 [MB] (16 MBps) [2024-12-06T00:12:03.647Z] Copying: 1016/1024 [MB] (18 MBps) [2024-12-06T00:12:03.647Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-06 00:12:03.553239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.938 [2024-12-06 00:12:03.553325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:30.938 [2024-12-06 00:12:03.553342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:30.938 [2024-12-06 00:12:03.553352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.938 [2024-12-06 00:12:03.553382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:30.938 [2024-12-06 00:12:03.556552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.938 [2024-12-06 00:12:03.556595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:30.938 [2024-12-06 00:12:03.556609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.151 ms 00:34:30.938 [2024-12-06 00:12:03.556619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.938 [2024-12-06 00:12:03.556864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.938 [2024-12-06 00:12:03.556875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:30.938 [2024-12-06 00:12:03.556884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:34:30.938 [2024-12-06 00:12:03.556893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.938 [2024-12-06 00:12:03.556926] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.938 [2024-12-06 00:12:03.556936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:30.938 [2024-12-06 00:12:03.556945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:30.938 [2024-12-06 00:12:03.556953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.938 [2024-12-06 00:12:03.557026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.938 [2024-12-06 00:12:03.557037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:30.938 [2024-12-06 00:12:03.557046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:30.938 [2024-12-06 00:12:03.557055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.938 [2024-12-06 00:12:03.557069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:30.938 [2024-12-06 00:12:03.557083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:34:30.938 [2024-12-06 00:12:03.557234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:30.938 [2024-12-06 00:12:03.557263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557820] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:30.939 [2024-12-06 00:12:03.557890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:30.939 [2024-12-06 00:12:03.557898] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb7d2de-2db9-4360-a868-f6ce287ca9bb 00:34:30.939 [2024-12-06 00:12:03.557906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:30.939 [2024-12-06 00:12:03.557914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:30.939 [2024-12-06 00:12:03.557921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:30.939 [2024-12-06 00:12:03.557930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:30.939 [2024-12-06 00:12:03.557937] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:30.939 [2024-12-06 00:12:03.557957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:30.939 [2024-12-06 00:12:03.557981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:30.939 [2024-12-06 00:12:03.557989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:30.939 [2024-12-06 00:12:03.557996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:30.939 [2024-12-06 00:12:03.558004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.939 [2024-12-06 00:12:03.558011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:30.939 [2024-12-06 00:12:03.558020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:34:30.939 [2024-12-06 00:12:03.558030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.940 [2024-12-06 00:12:03.572453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.940 [2024-12-06 00:12:03.572496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:30.940 [2024-12-06 00:12:03.572508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.405 ms 00:34:30.940 [2024-12-06 00:12:03.572516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.940 [2024-12-06 00:12:03.572908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.940 [2024-12-06 00:12:03.573058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:30.940 [2024-12-06 00:12:03.573075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.367 ms 00:34:30.940 [2024-12-06 00:12:03.573084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.940 [2024-12-06 00:12:03.609559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.940 [2024-12-06 00:12:03.609601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:30.940 [2024-12-06 00:12:03.609615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.940 [2024-12-06 00:12:03.609625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.940 [2024-12-06 00:12:03.609701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.940 [2024-12-06 00:12:03.609712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:30.940 [2024-12-06 00:12:03.609729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.940 [2024-12-06 00:12:03.609738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.940 [2024-12-06 00:12:03.609802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.940 [2024-12-06 00:12:03.609814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:30.940 [2024-12-06 00:12:03.609824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.940 [2024-12-06 00:12:03.609833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.940 [2024-12-06 00:12:03.609851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.940 [2024-12-06 00:12:03.609860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:30.940 [2024-12-06 00:12:03.609869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.940 [2024-12-06 00:12:03.609887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.696649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.696713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:31.202 [2024-12-06 00:12:03.696727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.696736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:31.202 [2024-12-06 00:12:03.765315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:31.202 [2024-12-06 00:12:03.765444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:31.202 
[2024-12-06 00:12:03.765512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:31.202 [2024-12-06 00:12:03.765620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:31.202 [2024-12-06 00:12:03.765674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:31.202 [2024-12-06 00:12:03.765741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.202 [2024-12-06 00:12:03.765803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:31.202 [2024-12-06 00:12:03.765811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.202 [2024-12-06 00:12:03.765824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.202 [2024-12-06 00:12:03.765983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 212.693 ms, result 0 00:34:32.147 00:34:32.147 00:34:32.147 00:12:04 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:34.694 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:34.694 00:12:06 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:34.694 [2024-12-06 00:12:06.865317] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:34:34.694 [2024-12-06 00:12:06.865470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86226 ] 00:34:34.694 [2024-12-06 00:12:07.027805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.694 [2024-12-06 00:12:07.148949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.957 [2024-12-06 00:12:07.442559] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:34.957 [2024-12-06 00:12:07.442649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:34.957 [2024-12-06 00:12:07.601944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.602026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:34.957 [2024-12-06 00:12:07.602042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:34.957 [2024-12-06 00:12:07.602051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.602103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.602116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:34.957 [2024-12-06 00:12:07.602125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:34.957 [2024-12-06 00:12:07.602133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.602154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:34.957 [2024-12-06 00:12:07.602825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:34.957 [2024-12-06 00:12:07.602842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.602851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:34.957 [2024-12-06 00:12:07.602860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:34:34.957 [2024-12-06 00:12:07.602868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.603298] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:34.957 [2024-12-06 00:12:07.603371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.603384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:34.957 [2024-12-06 00:12:07.603395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:34:34.957 [2024-12-06 00:12:07.603403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.603491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.603502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:34.957 [2024-12-06 00:12:07.603511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:34.957 [2024-12-06 00:12:07.603519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.603808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:34.957 [2024-12-06 00:12:07.603820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:34.957 [2024-12-06 00:12:07.603829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:34:34.957 [2024-12-06 00:12:07.603837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.603906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.603915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:34.957 [2024-12-06 00:12:07.603923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:34.957 [2024-12-06 00:12:07.603931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.603952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.603960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:34.957 [2024-12-06 00:12:07.604005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:34.957 [2024-12-06 00:12:07.604013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.604035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:34.957 [2024-12-06 00:12:07.608287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.608356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:34.957 [2024-12-06 00:12:07.608368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.257 ms 00:34:34.957 [2024-12-06 00:12:07.608376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.608420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.608431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:34.957 [2024-12-06 00:12:07.608440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:34.957 [2024-12-06 00:12:07.608449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.608501] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:34.957 [2024-12-06 00:12:07.608524] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:34.957 [2024-12-06 00:12:07.608564] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:34.957 [2024-12-06 00:12:07.608580] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:34.957 [2024-12-06 00:12:07.608685] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:34.957 [2024-12-06 00:12:07.608696] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:34.957 [2024-12-06 00:12:07.608707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:34.957 [2024-12-06 00:12:07.608718] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:34.957 [2024-12-06 00:12:07.608728] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:34.957 [2024-12-06 00:12:07.608739] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:34.957 [2024-12-06 00:12:07.608747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:34.957 [2024-12-06 00:12:07.608755] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:34.957 [2024-12-06 00:12:07.608762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:34.957 [2024-12-06 00:12:07.608769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.608778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:34.957 [2024-12-06 00:12:07.608786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:34:34.957 [2024-12-06 00:12:07.608794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.608876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.957 [2024-12-06 00:12:07.608885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:34.957 [2024-12-06 00:12:07.608892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:34.957 [2024-12-06 00:12:07.608902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.957 [2024-12-06 00:12:07.609021] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:34.957 [2024-12-06 00:12:07.609033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:34.957 [2024-12-06 00:12:07.609042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:34.957 [2024-12-06 00:12:07.609051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:34.957 [2024-12-06 00:12:07.609059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:34.958 [2024-12-06 00:12:07.609066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:34.958 [2024-12-06 00:12:07.609088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:34.958 [2024-12-06 00:12:07.609102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:34.958 [2024-12-06 00:12:07.609112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:34.958 [2024-12-06 00:12:07.609120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:34.958 [2024-12-06 00:12:07.609126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:34.958 [2024-12-06 00:12:07.609134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:34.958 [2024-12-06 00:12:07.609147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:34.958 [2024-12-06 00:12:07.609160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609166] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:34.958 [2024-12-06 00:12:07.609179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:34.958 [2024-12-06 00:12:07.609200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:34.958 [2024-12-06 00:12:07.609219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:34.958 [2024-12-06 00:12:07.609238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:34.958 [2024-12-06 00:12:07.609259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:34.958 [2024-12-06 00:12:07.609273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:34.958 [2024-12-06 00:12:07.609280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:34.958 [2024-12-06 00:12:07.609287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:34.958 [2024-12-06 00:12:07.609293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:34.958 [2024-12-06 00:12:07.609299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:34.958 [2024-12-06 00:12:07.609306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:34.958 [2024-12-06 00:12:07.609319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:34.958 [2024-12-06 00:12:07.609324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:34.958 [2024-12-06 00:12:07.609341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:34.958 [2024-12-06 00:12:07.609348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:34.958 [2024-12-06 00:12:07.609366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:34.958 [2024-12-06 00:12:07.609373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:34.958 [2024-12-06 00:12:07.609379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:34.958 
[2024-12-06 00:12:07.609386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:34.958 [2024-12-06 00:12:07.609393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:34.958 [2024-12-06 00:12:07.609399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:34.958 [2024-12-06 00:12:07.609407] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:34.958 [2024-12-06 00:12:07.609416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:34.958 [2024-12-06 00:12:07.609432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:34.958 [2024-12-06 00:12:07.609440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:34.958 [2024-12-06 00:12:07.609448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:34.958 [2024-12-06 00:12:07.609454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:34.958 [2024-12-06 00:12:07.609461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:34.958 [2024-12-06 00:12:07.609468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:34.958 [2024-12-06 00:12:07.609476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:34.958 [2024-12-06 00:12:07.609483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:34.958 [2024-12-06 00:12:07.609489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:34.958 [2024-12-06 00:12:07.609524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:34.958 [2024-12-06 00:12:07.609532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:34.958 [2024-12-06 00:12:07.609546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:34.958 [2024-12-06 00:12:07.609554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:34.959 [2024-12-06 00:12:07.609561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:34.959 [2024-12-06 00:12:07.609570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.959 [2024-12-06 00:12:07.609578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:34.959 [2024-12-06 00:12:07.609586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:34:34.959 [2024-12-06 00:12:07.609594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.959 [2024-12-06 00:12:07.636996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.959 [2024-12-06 00:12:07.637040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:34.959 [2024-12-06 00:12:07.637052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.360 ms 00:34:34.959 [2024-12-06 00:12:07.637060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.959 [2024-12-06 00:12:07.637148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.959 [2024-12-06 00:12:07.637158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:34.959 [2024-12-06 00:12:07.637171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:34:34.959 [2024-12-06 00:12:07.637179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.220 [2024-12-06 00:12:07.684361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.220 [2024-12-06 00:12:07.684429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:35.220 [2024-12-06 00:12:07.684443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.124 ms 00:34:35.220 [2024-12-06 00:12:07.684451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.220 [2024-12-06 00:12:07.684502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.220 [2024-12-06 00:12:07.684514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:35.221 [2024-12-06 00:12:07.684523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:35.221 [2024-12-06 00:12:07.684532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.684643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.684656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:35.221 [2024-12-06 00:12:07.684666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:35.221 [2024-12-06 00:12:07.684674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.684804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.684817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:35.221 [2024-12-06 00:12:07.684827] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:34:35.221 [2024-12-06 00:12:07.684835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.700289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.700341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:35.221 [2024-12-06 00:12:07.700354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.433 ms 00:34:35.221 [2024-12-06 00:12:07.700362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.700512] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:35.221 [2024-12-06 00:12:07.700527] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:35.221 [2024-12-06 00:12:07.700541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.700549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:35.221 [2024-12-06 00:12:07.700558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:35.221 [2024-12-06 00:12:07.700566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.712855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.712897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:35.221 [2024-12-06 00:12:07.712908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.272 ms 00:34:35.221 [2024-12-06 00:12:07.712916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.713060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.713071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:35.221 [2024-12-06 00:12:07.713081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:34:35.221 [2024-12-06 00:12:07.713093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.713142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.713151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:35.221 [2024-12-06 00:12:07.713168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:35.221 [2024-12-06 00:12:07.713175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.713740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.713751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:35.221 [2024-12-06 00:12:07.713763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:34:35.221 [2024-12-06 00:12:07.713770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.713790] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:35.221 [2024-12-06 00:12:07.713800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.713807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:35.221 [2024-12-06 00:12:07.713815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:35.221 [2024-12-06 00:12:07.713822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.726330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:35.221 [2024-12-06 00:12:07.726486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.726496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:35.221 [2024-12-06 00:12:07.726507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.647 ms 00:34:35.221 [2024-12-06 00:12:07.726515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.728805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.728839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:35.221 [2024-12-06 00:12:07.728848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.266 ms 00:34:35.221 [2024-12-06 00:12:07.728856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.728948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.728958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:35.221 [2024-12-06 00:12:07.728983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:35.221 [2024-12-06 00:12:07.728992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.729015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.729030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:35.221 [2024-12-06 00:12:07.729038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:35.221 [2024-12-06 00:12:07.729047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.729079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:35.221 [2024-12-06 00:12:07.729088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.729096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:35.221 [2024-12-06 00:12:07.729105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:35.221 [2024-12-06 00:12:07.729112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.755866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.755920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:35.221 [2024-12-06 00:12:07.755934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.732 ms 00:34:35.221 [2024-12-06 00:12:07.755943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.756036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:35.221 [2024-12-06 00:12:07.756048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:35.221 [2024-12-06 00:12:07.756058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:34:35.221 [2024-12-06 00:12:07.756067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:35.221 [2024-12-06 00:12:07.757266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.848 ms, result 0 00:34:36.166  [2024-12-06T00:12:09.820Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-06T00:12:11.208Z] Copying: 25/1024 [MB] (11 MBps) [2024-12-06T00:12:11.792Z] Copying: 37/1024 [MB] (12 MBps) [2024-12-06T00:12:12.811Z] Copying: 47/1024 [MB] (10 MBps) [2024-12-06T00:12:14.195Z] Copying: 58/1024 [MB] (10 MBps) [2024-12-06T00:12:15.141Z] Copying: 69/1024 [MB] (10 MBps) [2024-12-06T00:12:16.080Z] Copying: 80/1024 [MB] (11 MBps) [2024-12-06T00:12:17.024Z] Copying: 91/1024 [MB] (10 MBps) [2024-12-06T00:12:17.968Z] Copying: 110/1024 [MB] (19 MBps) [2024-12-06T00:12:18.912Z] Copying: 130/1024 [MB] (19 MBps) [2024-12-06T00:12:19.855Z] Copying: 156/1024 [MB] (25 MBps) [2024-12-06T00:12:20.799Z] Copying: 190/1024 [MB] (33 MBps) [2024-12-06T00:12:22.183Z] Copying: 203/1024 [MB] (13 MBps) [2024-12-06T00:12:23.126Z] Copying: 222/1024 [MB] (19 MBps) [2024-12-06T00:12:24.072Z] Copying: 239/1024 [MB] (17 MBps) [2024-12-06T00:12:25.014Z] Copying: 252/1024 [MB] (12 MBps) [2024-12-06T00:12:25.958Z] Copying: 268/1024 [MB] (16 MBps) [2024-12-06T00:12:26.903Z] Copying: 283/1024 [MB] (15 MBps) [2024-12-06T00:12:27.847Z] Copying: 313/1024 [MB] (30 MBps) [2024-12-06T00:12:28.792Z] Copying: 352/1024 [MB] (38 MBps) [2024-12-06T00:12:30.182Z] Copying: 390/1024 [MB] (38 MBps) [2024-12-06T00:12:31.127Z] Copying: 404/1024 [MB] (13 MBps) [2024-12-06T00:12:32.072Z] Copying: 421/1024 [MB] (17 MBps) [2024-12-06T00:12:33.017Z] Copying: 459/1024 [MB] (37 MBps) [2024-12-06T00:12:33.963Z] Copying: 480/1024 [MB] (20 MBps) [2024-12-06T00:12:34.907Z] Copying: 516/1024 [MB] (36 MBps) [2024-12-06T00:12:35.849Z] Copying: 535/1024 [MB] (19 MBps) [2024-12-06T00:12:36.792Z] Copying: 574/1024 [MB] (38 MBps) [2024-12-06T00:12:38.180Z] Copying: 608/1024 [MB] (34 MBps) [2024-12-06T00:12:39.124Z] Copying: 624/1024 [MB] (16 MBps) [2024-12-06T00:12:40.064Z] Copying: 640/1024 [MB] (15 MBps) [2024-12-06T00:12:41.008Z] Copying: 653/1024 [MB] (12 MBps) [2024-12-06T00:12:41.948Z] Copying: 671/1024 [MB] (17 MBps) [2024-12-06T00:12:42.890Z] Copying: 694/1024 [MB] (23 MBps) [2024-12-06T00:12:43.866Z] Copying: 713/1024 [MB] (18 MBps) [2024-12-06T00:12:44.853Z] Copying: 730/1024 [MB] (17 MBps) [2024-12-06T00:12:45.796Z] Copying: 744/1024 [MB] (13 MBps) [2024-12-06T00:12:47.205Z] Copying: 762/1024 [MB] (17 MBps) [2024-12-06T00:12:47.778Z] Copying: 782/1024 [MB] (19 MBps) [2024-12-06T00:12:49.166Z] Copying: 792/1024 [MB] (10 MBps) [2024-12-06T00:12:50.110Z] Copying: 802/1024 [MB] (10 MBps) [2024-12-06T00:12:51.053Z] Copying: 845/1024 [MB] (43 MBps) [2024-12-06T00:12:51.998Z] Copying: 884/1024 [MB] (39 MBps) [2024-12-06T00:12:52.941Z] Copying: 900/1024 [MB] (15 MBps) [2024-12-06T00:12:53.888Z] Copying: 923/1024 [MB] (23 MBps) [2024-12-06T00:12:54.832Z] Copying: 934/1024 [MB] (10 MBps) [2024-12-06T00:12:55.800Z] Copying: 944/1024 [MB] (10 MBps) [2024-12-06T00:12:57.182Z] Copying: 962/1024 [MB] (17 MBps) [2024-12-06T00:12:58.122Z] Copying: 972/1024 [MB] (10 MBps) [2024-12-06T00:12:59.065Z] Copying: 983/1024 [MB] (10 MBps) [2024-12-06T00:13:00.011Z] Copying: 1000/1024 [MB] (17 MBps) [2024-12-06T00:13:00.956Z] Copying: 1014/1024 [MB] (13 MBps) [2024-12-06T00:13:01.531Z] Copying: 1048124/1048576 [kB] (9172 kBps) [2024-12-06T00:13:01.531Z] Copying: 
1024/1024 [MB] (average 19 MBps)[2024-12-06 00:13:01.278667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.822 [2024-12-06 00:13:01.278752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:28.822 [2024-12-06 00:13:01.278771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:28.822 [2024-12-06 00:13:01.278781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.822 [2024-12-06 00:13:01.279374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:28.822 [2024-12-06 00:13:01.285772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.822 [2024-12-06 00:13:01.285824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:28.822 [2024-12-06 00:13:01.285836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:35:28.822 [2024-12-06 00:13:01.285844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.822 [2024-12-06 00:13:01.296810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.822 [2024-12-06 00:13:01.296864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:28.822 [2024-12-06 00:13:01.296876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.374 ms 00:35:28.822 [2024-12-06 00:13:01.296885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.822 [2024-12-06 00:13:01.296916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.822 [2024-12-06 00:13:01.296926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:28.822 [2024-12-06 00:13:01.296935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:28.822 [2024-12-06 00:13:01.296944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.822 [2024-12-06 00:13:01.297020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.822 [2024-12-06 00:13:01.297035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:28.822 [2024-12-06 00:13:01.297044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:35:28.822 [2024-12-06 00:13:01.297051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.822 [2024-12-06 00:13:01.297066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:28.822 [2024-12-06 00:13:01.297078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125440 / 261120 wr_cnt: 1 state: open 00:35:28.822 [2024-12-06 00:13:01.297088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 
00:35:28.822 [2024-12-06 00:13:01.297137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 
wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:28.822 [2024-12-06 00:13:01.297468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297735] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:28.823 [2024-12-06 00:13:01.297902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:28.823 [2024-12-06 00:13:01.297910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb7d2de-2db9-4360-a868-f6ce287ca9bb 00:35:28.823 [2024-12-06 00:13:01.297919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125440 00:35:28.823 [2024-12-06 00:13:01.297927] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125472 00:35:28.823 [2024-12-06 00:13:01.297935] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125440 00:35:28.823 [2024-12-06 00:13:01.297943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:35:28.823 [2024-12-06 00:13:01.297953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:28.823 [2024-12-06 00:13:01.297961] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:28.823 [2024-12-06 00:13:01.297981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:28.823 [2024-12-06 00:13:01.297989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:28.823 [2024-12-06 00:13:01.297996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:28.823 [2024-12-06 00:13:01.298003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.823 [2024-12-06 00:13:01.298012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:28.823 [2024-12-06 00:13:01.298020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:35:28.823 [2024-12-06 00:13:01.298028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.311376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.823 [2024-12-06 00:13:01.311426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:28.823 [2024-12-06 00:13:01.311444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.333 ms 00:35:28.823 [2024-12-06 00:13:01.311453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.311854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:28.823 [2024-12-06 00:13:01.311871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:28.823 [2024-12-06 00:13:01.311881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:35:28.823 [2024-12-06 00:13:01.311889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.348077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.823 [2024-12-06 00:13:01.348147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:28.823 [2024-12-06 00:13:01.348158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.823 [2024-12-06 00:13:01.348167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.348235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.823 [2024-12-06 00:13:01.348245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:28.823 [2024-12-06 00:13:01.348254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.823 [2024-12-06 00:13:01.348263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.348360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.823 [2024-12-06 00:13:01.348373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:28.823 [2024-12-06 00:13:01.348386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.823 [2024-12-06 00:13:01.348394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.348413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.823 [2024-12-06 00:13:01.348422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:28.823 [2024-12-06 00:13:01.348431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.823 [2024-12-06 00:13:01.348441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:35:28.823 [2024-12-06 00:13:01.434471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.823 [2024-12-06 00:13:01.434537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:28.823 [2024-12-06 00:13:01.434551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.823 [2024-12-06 00:13:01.434559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.823 [2024-12-06 00:13:01.502642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.823 [2024-12-06 00:13:01.502710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:28.823 [2024-12-06 00:13:01.502723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.823 [2024-12-06 00:13:01.502732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.502792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.824 [2024-12-06 00:13:01.502803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:28.824 [2024-12-06 00:13:01.502812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.824 [2024-12-06 00:13:01.502825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.502887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.824 [2024-12-06 00:13:01.502898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:28.824 [2024-12-06 00:13:01.502906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.824 [2024-12-06 00:13:01.502914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.503019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.824 [2024-12-06 00:13:01.503031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:28.824 [2024-12-06 00:13:01.503039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.824 [2024-12-06 00:13:01.503048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.503079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.824 [2024-12-06 00:13:01.503088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:28.824 [2024-12-06 00:13:01.503096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.824 [2024-12-06 00:13:01.503105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.503145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.824 [2024-12-06 00:13:01.503155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:28.824 [2024-12-06 00:13:01.503163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.824 [2024-12-06 00:13:01.503172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.503221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:28.824 [2024-12-06 00:13:01.503231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:28.824 [2024-12-06 00:13:01.503240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:28.824 [2024-12-06 00:13:01.503248] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:28.824 [2024-12-06 00:13:01.503381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 227.119 ms, result 0 00:35:30.739 00:35:30.740 00:35:30.740 00:13:03 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:30.740 [2024-12-06 00:13:03.157357] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 00:35:30.740 [2024-12-06 00:13:03.157521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86779 ] 00:35:30.740 [2024-12-06 00:13:03.319742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.740 [2024-12-06 00:13:03.437538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.314 [2024-12-06 00:13:03.736773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:31.314 [2024-12-06 00:13:03.736862] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:31.314 [2024-12-06 00:13:03.897279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.897339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:31.314 [2024-12-06 00:13:03.897355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:31.314 [2024-12-06 00:13:03.897364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.897418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.897431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:31.314 [2024-12-06 00:13:03.897441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:31.314 [2024-12-06 00:13:03.897448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.897469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:31.314 [2024-12-06 00:13:03.898227] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:31.314 [2024-12-06 00:13:03.898247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.898255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:31.314 [2024-12-06 00:13:03.898264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:35:31.314 [2024-12-06 00:13:03.898271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.898560] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:31.314 [2024-12-06 00:13:03.898586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.898598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:31.314 [2024-12-06 00:13:03.898609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 
00:35:31.314 [2024-12-06 00:13:03.898616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.898669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.898679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:31.314 [2024-12-06 00:13:03.898687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:31.314 [2024-12-06 00:13:03.898694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.899037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.899059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:31.314 [2024-12-06 00:13:03.899072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:35:31.314 [2024-12-06 00:13:03.899080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.899153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.899163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:31.314 [2024-12-06 00:13:03.899171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:35:31.314 [2024-12-06 00:13:03.899179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.899203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.899212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:31.314 [2024-12-06 00:13:03.899223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:31.314 [2024-12-06 00:13:03.899230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.899252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:31.314 [2024-12-06 00:13:03.903457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.903502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:31.314 [2024-12-06 00:13:03.903513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.210 ms 00:35:31.314 [2024-12-06 00:13:03.903521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.903560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.903569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:31.314 [2024-12-06 00:13:03.903577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:31.314 [2024-12-06 00:13:03.903584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.903642] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:31.314 [2024-12-06 00:13:03.903666] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:31.314 [2024-12-06 00:13:03.903703] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:31.314 [2024-12-06 00:13:03.903718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
00:35:31.314 [2024-12-06 00:13:03.903823] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:31.314 [2024-12-06 00:13:03.903834] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:31.314 [2024-12-06 00:13:03.903844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:31.314 [2024-12-06 00:13:03.903856] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:31.314 [2024-12-06 00:13:03.903865] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:31.314 [2024-12-06 00:13:03.903875] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:31.314 [2024-12-06 00:13:03.903883] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:31.314 [2024-12-06 00:13:03.903890] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:31.314 [2024-12-06 00:13:03.903898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:31.314 [2024-12-06 00:13:03.903907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.903914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:31.314 [2024-12-06 00:13:03.903922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:35:31.314 [2024-12-06 00:13:03.903930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.314 [2024-12-06 00:13:03.904029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.314 [2024-12-06 00:13:03.904038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:31.314 [2024-12-06 00:13:03.904046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:35:31.314 [2024-12-06 00:13:03.904056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.904160] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:31.315 [2024-12-06 00:13:03.904170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:31.315 [2024-12-06 00:13:03.904178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:31.315 [2024-12-06 00:13:03.904200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:31.315 [2024-12-06 00:13:03.904222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:31.315 [2024-12-06 00:13:03.904235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:31.315 [2024-12-06 00:13:03.904241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:31.315 [2024-12-06 00:13:03.904247] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:35:31.315 [2024-12-06 00:13:03.904254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:31.315 [2024-12-06 00:13:03.904263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:31.315 [2024-12-06 00:13:03.904277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:31.315 [2024-12-06 00:13:03.904290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:31.315 [2024-12-06 00:13:03.904311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:31.315 [2024-12-06 00:13:03.904349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:31.315 [2024-12-06 00:13:03.904369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:31.315 [2024-12-06 00:13:03.904390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:31.315 [2024-12-06 00:13:03.904410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:31.315 [2024-12-06 00:13:03.904424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:31.315 [2024-12-06 00:13:03.904431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:31.315 [2024-12-06 00:13:03.904437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:31.315 [2024-12-06 00:13:03.904445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:31.315 [2024-12-06 00:13:03.904451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:31.315 [2024-12-06 00:13:03.904458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:31.315 [2024-12-06 00:13:03.904472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:31.315 [2024-12-06 00:13:03.904487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:31.315 [2024-12-06 
00:13:03.904502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:31.315 [2024-12-06 00:13:03.904509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:31.315 [2024-12-06 00:13:03.904528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:31.315 [2024-12-06 00:13:03.904535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:31.315 [2024-12-06 00:13:03.904542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:31.315 [2024-12-06 00:13:03.904549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:31.315 [2024-12-06 00:13:03.904555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:31.315 [2024-12-06 00:13:03.904561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:31.315 [2024-12-06 00:13:03.904570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:31.315 [2024-12-06 00:13:03.904580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:31.315 [2024-12-06 00:13:03.904596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:31.315 [2024-12-06 00:13:03.904603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:31.315 [2024-12-06 00:13:03.904610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:31.315 [2024-12-06 00:13:03.904617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:31.315 [2024-12-06 00:13:03.904624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:31.315 [2024-12-06 00:13:03.904632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:31.315 [2024-12-06 00:13:03.904639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:31.315 [2024-12-06 00:13:03.904646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:31.315 [2024-12-06 00:13:03.904654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:31.315 [2024-12-06 00:13:03.904690] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:31.315 [2024-12-06 00:13:03.904698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:31.315 [2024-12-06 00:13:03.904715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:31.315 [2024-12-06 00:13:03.904722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:31.315 [2024-12-06 00:13:03.904729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:31.315 [2024-12-06 00:13:03.904737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.315 [2024-12-06 00:13:03.904745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:31.315 [2024-12-06 00:13:03.904753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:35:31.315 [2024-12-06 00:13:03.904761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.933340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.315 [2024-12-06 00:13:03.933397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:31.315 [2024-12-06 00:13:03.933410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.535 ms 00:35:31.315 [2024-12-06 00:13:03.933418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.933510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.315 [2024-12-06 00:13:03.933519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:31.315 [2024-12-06 00:13:03.933532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:31.315 [2024-12-06 00:13:03.933540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.982993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.315 [2024-12-06 00:13:03.983051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:31.315 [2024-12-06 00:13:03.983064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.392 ms 00:35:31.315 [2024-12-06 00:13:03.983072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.983123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.315 [2024-12-06 00:13:03.983133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:31.315 [2024-12-06 00:13:03.983143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:31.315 [2024-12-06 00:13:03.983152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.983261] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.315 [2024-12-06 00:13:03.983272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:31.315 [2024-12-06 00:13:03.983282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:31.315 [2024-12-06 00:13:03.983290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.315 [2024-12-06 00:13:03.983418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:03.983431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:31.316 [2024-12-06 00:13:03.983439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:35:31.316 [2024-12-06 00:13:03.983447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:03.998849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:03.998902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:31.316 [2024-12-06 00:13:03.998914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.383 ms 00:35:31.316 [2024-12-06 00:13:03.998923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:03.999105] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:31.316 [2024-12-06 00:13:03.999120] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:31.316 [2024-12-06 00:13:03.999133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:03.999141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:31.316 [2024-12-06 00:13:03.999149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:31.316 [2024-12-06 00:13:03.999157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:04.011444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:04.011490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:31.316 [2024-12-06 00:13:04.011502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.270 ms 00:35:31.316 [2024-12-06 00:13:04.011509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:04.011637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:04.011647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:31.316 [2024-12-06 00:13:04.011655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:35:31.316 [2024-12-06 00:13:04.011668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:04.011718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:04.011728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:31.316 [2024-12-06 00:13:04.011736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:31.316 [2024-12-06 00:13:04.011751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:04.012370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 
00:13:04.012393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:31.316 [2024-12-06 00:13:04.012403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:35:31.316 [2024-12-06 00:13:04.012411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.316 [2024-12-06 00:13:04.012433] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:31.316 [2024-12-06 00:13:04.012444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.316 [2024-12-06 00:13:04.012452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:31.316 [2024-12-06 00:13:04.012460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:31.316 [2024-12-06 00:13:04.012468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.577 [2024-12-06 00:13:04.024855] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:31.577 [2024-12-06 00:13:04.025042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.577 [2024-12-06 00:13:04.025054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:31.577 [2024-12-06 00:13:04.025065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.554 ms 00:35:31.577 [2024-12-06 00:13:04.025073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.577 [2024-12-06 00:13:04.027212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.577 [2024-12-06 00:13:04.027247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:31.577 [2024-12-06 00:13:04.027256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:35:31.577 [2024-12-06 00:13:04.027263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.577 [2024-12-06 00:13:04.027339] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:31.577 [2024-12-06 00:13:04.027789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.577 [2024-12-06 00:13:04.027811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:31.577 [2024-12-06 00:13:04.027821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:35:31.577 [2024-12-06 00:13:04.027829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.577 [2024-12-06 00:13:04.027857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.577 [2024-12-06 00:13:04.027867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:31.577 [2024-12-06 00:13:04.027877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:31.577 [2024-12-06 00:13:04.027884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.577 [2024-12-06 00:13:04.027918] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:31.577 [2024-12-06 00:13:04.027927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.577 [2024-12-06 00:13:04.027935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:31.577 [2024-12-06 00:13:04.027943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:31.577 [2024-12-06 00:13:04.027952] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.577 [2024-12-06 00:13:04.054391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.578 [2024-12-06 00:13:04.054444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:31.578 [2024-12-06 00:13:04.054456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.405 ms 00:35:31.578 [2024-12-06 00:13:04.054464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.578 [2024-12-06 00:13:04.054547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:31.578 [2024-12-06 00:13:04.054558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:31.578 [2024-12-06 00:13:04.054567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:31.578 [2024-12-06 00:13:04.054576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:31.578 [2024-12-06 00:13:04.055772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.052 ms, result 0 00:35:32.966  [2024-12-06T00:13:06.618Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-06T00:13:07.559Z] Copying: 31/1024 [MB] (17 MBps) [2024-12-06T00:13:08.502Z] Copying: 42/1024 [MB] (10 MBps) [2024-12-06T00:13:09.440Z] Copying: 53/1024 [MB] (11 MBps) [2024-12-06T00:13:10.381Z] Copying: 64/1024 [MB] (10 MBps) [2024-12-06T00:13:11.321Z] Copying: 78/1024 [MB] (14 MBps) [2024-12-06T00:13:12.263Z] Copying: 99/1024 [MB] (20 MBps) [2024-12-06T00:13:13.652Z] Copying: 117/1024 [MB] (18 MBps) [2024-12-06T00:13:14.595Z] Copying: 134/1024 [MB] (17 MBps) [2024-12-06T00:13:15.553Z] Copying: 148/1024 [MB] (14 MBps) [2024-12-06T00:13:16.499Z] Copying: 167/1024 [MB] (18 MBps) [2024-12-06T00:13:17.441Z] Copying: 180/1024 [MB] (12 MBps) [2024-12-06T00:13:18.384Z] Copying: 197/1024 [MB] (17 MBps) [2024-12-06T00:13:19.328Z] Copying: 218/1024 [MB] (21 MBps) [2024-12-06T00:13:20.273Z] Copying: 237/1024 [MB] (18 MBps) [2024-12-06T00:13:21.661Z] Copying: 251/1024 [MB] (14 MBps) [2024-12-06T00:13:22.605Z] Copying: 265/1024 [MB] (14 MBps) [2024-12-06T00:13:23.550Z] Copying: 285/1024 [MB] (19 MBps) [2024-12-06T00:13:24.507Z] Copying: 303/1024 [MB] (17 MBps) [2024-12-06T00:13:25.450Z] Copying: 319/1024 [MB] (16 MBps) [2024-12-06T00:13:26.392Z] Copying: 331/1024 [MB] (12 MBps) [2024-12-06T00:13:27.333Z] Copying: 342/1024 [MB] (10 MBps) [2024-12-06T00:13:28.272Z] Copying: 365/1024 [MB] (22 MBps) [2024-12-06T00:13:29.657Z] Copying: 377/1024 [MB] (12 MBps) [2024-12-06T00:13:30.603Z] Copying: 398/1024 [MB] (20 MBps) [2024-12-06T00:13:31.550Z] Copying: 411/1024 [MB] (12 MBps) [2024-12-06T00:13:32.497Z] Copying: 431/1024 [MB] (19 MBps) [2024-12-06T00:13:33.443Z] Copying: 442/1024 [MB] (11 MBps) [2024-12-06T00:13:34.387Z] Copying: 461/1024 [MB] (19 MBps) [2024-12-06T00:13:35.329Z] Copying: 474/1024 [MB] (13 MBps) [2024-12-06T00:13:36.268Z] Copying: 490/1024 [MB] (15 MBps) [2024-12-06T00:13:37.649Z] Copying: 501/1024 [MB] (11 MBps) [2024-12-06T00:13:38.591Z] Copying: 520/1024 [MB] (18 MBps) [2024-12-06T00:13:39.531Z] Copying: 543/1024 [MB] (22 MBps) [2024-12-06T00:13:40.476Z] Copying: 564/1024 [MB] (21 MBps) [2024-12-06T00:13:41.422Z] Copying: 582/1024 [MB] (17 MBps) [2024-12-06T00:13:42.367Z] Copying: 602/1024 [MB] (20 MBps) [2024-12-06T00:13:43.309Z] Copying: 620/1024 [MB] (17 MBps) [2024-12-06T00:13:44.697Z] Copying: 639/1024 [MB] (19 MBps) [2024-12-06T00:13:45.269Z] Copying: 660/1024 [MB] (20 MBps) 
[2024-12-06T00:13:46.659Z] Copying: 675/1024 [MB] (15 MBps) [2024-12-06T00:13:47.285Z] Copying: 686/1024 [MB] (10 MBps) [2024-12-06T00:13:48.675Z] Copying: 696/1024 [MB] (10 MBps) [2024-12-06T00:13:49.621Z] Copying: 718/1024 [MB] (21 MBps) [2024-12-06T00:13:50.566Z] Copying: 735/1024 [MB] (17 MBps) [2024-12-06T00:13:51.512Z] Copying: 746/1024 [MB] (10 MBps) [2024-12-06T00:13:52.456Z] Copying: 756/1024 [MB] (10 MBps) [2024-12-06T00:13:53.401Z] Copying: 770/1024 [MB] (13 MBps) [2024-12-06T00:13:54.345Z] Copying: 782/1024 [MB] (11 MBps) [2024-12-06T00:13:55.294Z] Copying: 802/1024 [MB] (20 MBps) [2024-12-06T00:13:56.676Z] Copying: 821/1024 [MB] (18 MBps) [2024-12-06T00:13:57.618Z] Copying: 837/1024 [MB] (16 MBps) [2024-12-06T00:13:58.561Z] Copying: 852/1024 [MB] (14 MBps) [2024-12-06T00:13:59.506Z] Copying: 870/1024 [MB] (18 MBps) [2024-12-06T00:14:00.451Z] Copying: 881/1024 [MB] (10 MBps) [2024-12-06T00:14:01.395Z] Copying: 892/1024 [MB] (10 MBps) [2024-12-06T00:14:02.338Z] Copying: 902/1024 [MB] (10 MBps) [2024-12-06T00:14:03.283Z] Copying: 925/1024 [MB] (23 MBps) [2024-12-06T00:14:04.671Z] Copying: 944/1024 [MB] (19 MBps) [2024-12-06T00:14:05.615Z] Copying: 964/1024 [MB] (19 MBps) [2024-12-06T00:14:06.556Z] Copying: 991/1024 [MB] (26 MBps) [2024-12-06T00:14:06.815Z] Copying: 1008/1024 [MB] (17 MBps) [2024-12-06T00:14:07.386Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 00:14:07.085124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.677 [2024-12-06 00:14:07.085207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:34.677 [2024-12-06 00:14:07.085224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:34.678 [2024-12-06 00:14:07.085234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.678 [2024-12-06 00:14:07.085260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:34.678 [2024-12-06 00:14:07.088391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.678 [2024-12-06 00:14:07.088441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:34.678 [2024-12-06 00:14:07.088453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:36:34.678 [2024-12-06 00:14:07.088470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.678 [2024-12-06 00:14:07.088851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.678 [2024-12-06 00:14:07.088897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:34.678 [2024-12-06 00:14:07.088908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:36:34.678 [2024-12-06 00:14:07.088917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.678 [2024-12-06 00:14:07.088947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.678 [2024-12-06 00:14:07.088957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:34.678 [2024-12-06 00:14:07.088977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:34.678 [2024-12-06 00:14:07.088986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.678 [2024-12-06 00:14:07.089052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.678 [2024-12-06 00:14:07.089065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
SHM clean state 00:36:34.678 [2024-12-06 00:14:07.089074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:36:34.678 [2024-12-06 00:14:07.089083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.678 [2024-12-06 00:14:07.089098] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:34.678 [2024-12-06 00:14:07.089111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:34.678 [2024-12-06 00:14:07.089121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089288] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 
00:14:07.089489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:34.678 [2024-12-06 00:14:07.089985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.089994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:36:34.679 [2024-12-06 00:14:07.090010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:34.679 [2024-12-06 00:14:07.090237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:34.679 [2024-12-06 00:14:07.090245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb7d2de-2db9-4360-a868-f6ce287ca9bb 00:36:34.679 [2024-12-06 00:14:07.090253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:34.679 [2024-12-06 00:14:07.090260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5664 00:36:34.679 [2024-12-06 00:14:07.090269] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5632 00:36:34.679 [2024-12-06 00:14:07.090281] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0057 00:36:34.679 [2024-12-06 00:14:07.090288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:34.679 [2024-12-06 00:14:07.090297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:34.679 [2024-12-06 00:14:07.090305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:34.679 [2024-12-06 00:14:07.090312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:34.679 [2024-12-06 00:14:07.090319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:34.679 [2024-12-06 00:14:07.090326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.679 [2024-12-06 00:14:07.090335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:34.679 [2024-12-06 00:14:07.090342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:36:34.679 [2024-12-06 00:14:07.090350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.105294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.679 [2024-12-06 00:14:07.105351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:34.679 [2024-12-06 00:14:07.105371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.924 ms 00:36:34.679 [2024-12-06 00:14:07.105386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.105790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:34.679 [2024-12-06 00:14:07.105811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:34.679 [2024-12-06 00:14:07.105822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:36:34.679 [2024-12-06 00:14:07.105830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.142500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.142560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:34.679 [2024-12-06 00:14:07.142573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.142582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 
[2024-12-06 00:14:07.142658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.142669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:34.679 [2024-12-06 00:14:07.142679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.142689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.142751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.142767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:34.679 [2024-12-06 00:14:07.142777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.142786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.142804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.142812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:34.679 [2024-12-06 00:14:07.142820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.142827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.228281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.228347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:34.679 [2024-12-06 00:14:07.228361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.228369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:34.679 [2024-12-06 00:14:07.298140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:34.679 [2024-12-06 00:14:07.298262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:34.679 [2024-12-06 00:14:07.298330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:34.679 [2024-12-06 00:14:07.298443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298455] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:34.679 [2024-12-06 00:14:07.298501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:34.679 [2024-12-06 00:14:07.298569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:34.679 [2024-12-06 00:14:07.298637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:34.679 [2024-12-06 00:14:07.298645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:34.679 [2024-12-06 00:14:07.298653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:34.679 [2024-12-06 00:14:07.298791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.631 ms, result 0 00:36:35.620 00:36:35.620 00:36:35.620 00:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:37.531 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:37.531 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:37.531 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:37.531 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84637 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84637 ']' 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84637 00:36:37.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84637) - No such process 00:36:37.792 Process with pid 84637 is not found 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 84637 is not found' 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:37.792 Remove shared memory files 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:37.792 00:14:10 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:37.793 00:14:10 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_band_md /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_l2p_l1 /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_l2p_l2 
/dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_l2p_l2_ctx /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_nvc_md /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_p2l_pool /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_sb /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_sb_shm /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_trim_bitmap /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_trim_log /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_trim_md /dev/hugepages/ftl_4fb7d2de-2db9-4360-a868-f6ce287ca9bb_vmap 00:36:37.793 00:14:10 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:37.793 00:14:10 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:37.793 00:14:10 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:37.793 00:36:37.793 real 4m41.837s 00:36:37.793 user 4m29.692s 00:36:37.793 sys 0m11.717s 00:36:37.793 00:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:37.793 ************************************ 00:36:37.793 END TEST ftl_restore_fast 00:36:37.793 00:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:37.793 ************************************ 00:36:37.793 00:14:10 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:37.793 00:14:10 ftl -- ftl/ftl.sh@14 -- # killprocess 75040 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 75040 ']' 00:36:37.793 Process with pid 75040 is not found 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@958 -- # kill -0 75040 00:36:37.793 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75040) - No such process 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75040 is not found' 00:36:37.793 00:14:10 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:37.793 00:14:10 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87470 00:36:37.793 00:14:10 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87470 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@835 -- # '[' -z 87470 ']' 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.793 00:14:10 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.793 00:14:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:37.793 [2024-12-06 00:14:10.398323] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 24.03.0 initialization... 
00:36:37.793 [2024-12-06 00:14:10.398457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87470 ] 00:36:38.054 [2024-12-06 00:14:10.563727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.054 [2024-12-06 00:14:10.686953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.998 00:14:11 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.998 00:14:11 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:38.998 00:14:11 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:38.998 nvme0n1 00:36:38.998 00:14:11 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:38.998 00:14:11 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:38.998 00:14:11 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:39.259 00:14:11 ftl -- ftl/common.sh@28 -- # stores=0795ea50-0180-4d5e-a865-06c1bf296143 00:36:39.259 00:14:11 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:39.259 00:14:11 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0795ea50-0180-4d5e-a865-06c1bf296143 00:36:39.519 00:14:12 ftl -- ftl/ftl.sh@23 -- # killprocess 87470 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@954 -- # '[' -z 87470 ']' 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@958 -- # kill -0 87470 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@959 -- # uname 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87470 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87470' 00:36:39.519 killing process with pid 87470 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@973 -- # kill 87470 00:36:39.519 00:14:12 ftl -- common/autotest_common.sh@978 -- # wait 87470 00:36:40.906 00:14:13 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:41.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:41.167 Waiting for block devices as requested 00:36:41.167 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:41.167 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:41.428 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:41.428 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:46.718 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:46.718 00:14:19 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:46.718 Remove shared memory files 00:36:46.718 00:14:19 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:46.718 00:14:19 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:46.718 00:14:19 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:46.718 00:14:19 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:46.718 00:14:19 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:46.718 00:14:19 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:46.718 
************************************ 00:36:46.718 END TEST ftl 00:36:46.718 ************************************ 00:36:46.718 00:36:46.718 real 19m1.289s 00:36:46.718 user 21m10.905s 00:36:46.718 sys 1m29.387s 00:36:46.718 00:14:19 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:46.718 00:14:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:46.718 00:14:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:46.718 00:14:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:46.718 00:14:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:46.718 00:14:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:46.718 00:14:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:46.718 00:14:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:46.718 00:14:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:46.719 00:14:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:46.719 00:14:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:46.719 00:14:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:46.719 00:14:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:46.719 00:14:19 -- common/autotest_common.sh@10 -- # set +x 00:36:46.719 00:14:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:46.719 00:14:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:46.719 00:14:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:46.719 00:14:19 -- common/autotest_common.sh@10 -- # set +x 00:36:48.126 INFO: APP EXITING 00:36:48.126 INFO: killing all VMs 00:36:48.126 INFO: killing vhost app 00:36:48.126 INFO: EXIT DONE 00:36:48.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:48.757 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:48.757 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:48.757 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:48.757 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:49.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:49.281 Cleaning 00:36:49.281 Removing: /var/run/dpdk/spdk0/config 00:36:49.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:49.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:49.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:49.281 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:49.281 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:49.281 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:49.281 Removing: /var/run/dpdk/spdk0 00:36:49.281 Removing: /var/run/dpdk/spdk_pid56952 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57158 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57366 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57459 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57499 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57626 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57644 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57838 00:36:49.281 Removing: /var/run/dpdk/spdk_pid57931 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58022 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58133 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58230 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58264 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58300 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58371 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58455 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58891 00:36:49.281 Removing: /var/run/dpdk/spdk_pid58944 
00:36:49.281 Removing: /var/run/dpdk/spdk_pid58996 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59012 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59103 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59119 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59210 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59226 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59279 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59297 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59350 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59368 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59528 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59565 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59648 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59820 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59899 00:36:49.281 Removing: /var/run/dpdk/spdk_pid59935 00:36:49.281 Removing: /var/run/dpdk/spdk_pid60362 00:36:49.281 Removing: /var/run/dpdk/spdk_pid60460 00:36:49.281 Removing: /var/run/dpdk/spdk_pid60569 00:36:49.281 Removing: /var/run/dpdk/spdk_pid60622 00:36:49.281 Removing: /var/run/dpdk/spdk_pid60642 00:36:49.281 Removing: /var/run/dpdk/spdk_pid60728 00:36:49.281 Removing: /var/run/dpdk/spdk_pid61353 00:36:49.281 Removing: /var/run/dpdk/spdk_pid61389 00:36:49.281 Removing: /var/run/dpdk/spdk_pid61854 00:36:49.281 Removing: /var/run/dpdk/spdk_pid61952 00:36:49.281 Removing: /var/run/dpdk/spdk_pid62072 00:36:49.281 Removing: /var/run/dpdk/spdk_pid62125 00:36:49.281 Removing: /var/run/dpdk/spdk_pid62151 00:36:49.281 Removing: /var/run/dpdk/spdk_pid62176 00:36:49.281 Removing: /var/run/dpdk/spdk_pid64040 00:36:49.281 Removing: /var/run/dpdk/spdk_pid64171 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64181 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64193 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64237 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64241 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64253 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64300 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64304 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64316 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64361 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64365 00:36:49.543 Removing: /var/run/dpdk/spdk_pid64377 00:36:49.543 Removing: /var/run/dpdk/spdk_pid65772 00:36:49.543 Removing: /var/run/dpdk/spdk_pid65876 00:36:49.543 Removing: /var/run/dpdk/spdk_pid67296 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69030 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69104 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69179 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69284 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69376 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69472 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69540 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69615 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69725 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69822 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69912 00:36:49.543 Removing: /var/run/dpdk/spdk_pid69986 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70061 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70171 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70257 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70353 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70427 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70502 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70606 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70698 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70794 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70862 00:36:49.543 Removing: /var/run/dpdk/spdk_pid70941 00:36:49.543 Removing: 
/var/run/dpdk/spdk_pid71014 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71088 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71197 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71282 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71381 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71451 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71525 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71605 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71680 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71778 00:36:49.543 Removing: /var/run/dpdk/spdk_pid71874 00:36:49.543 Removing: /var/run/dpdk/spdk_pid72018 00:36:49.543 Removing: /var/run/dpdk/spdk_pid72302 00:36:49.543 Removing: /var/run/dpdk/spdk_pid72344 00:36:49.543 Removing: /var/run/dpdk/spdk_pid72784 00:36:49.543 Removing: /var/run/dpdk/spdk_pid72958 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73067 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73178 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73235 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73262 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73581 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73637 00:36:49.543 Removing: /var/run/dpdk/spdk_pid73706 00:36:49.543 Removing: /var/run/dpdk/spdk_pid74094 00:36:49.543 Removing: /var/run/dpdk/spdk_pid74239 00:36:49.543 Removing: /var/run/dpdk/spdk_pid75040 00:36:49.543 Removing: /var/run/dpdk/spdk_pid75172 00:36:49.543 Removing: /var/run/dpdk/spdk_pid75350 00:36:49.543 Removing: /var/run/dpdk/spdk_pid75447 00:36:49.543 Removing: /var/run/dpdk/spdk_pid75750 00:36:49.543 Removing: /var/run/dpdk/spdk_pid75999 00:36:49.543 Removing: /var/run/dpdk/spdk_pid76345 00:36:49.543 Removing: /var/run/dpdk/spdk_pid76545 00:36:49.543 Removing: /var/run/dpdk/spdk_pid76718 00:36:49.543 Removing: /var/run/dpdk/spdk_pid76772 00:36:49.543 Removing: /var/run/dpdk/spdk_pid76965 00:36:49.543 Removing: /var/run/dpdk/spdk_pid76990 00:36:49.543 Removing: /var/run/dpdk/spdk_pid77044 00:36:49.543 Removing: /var/run/dpdk/spdk_pid77385 00:36:49.543 Removing: /var/run/dpdk/spdk_pid77635 00:36:49.543 Removing: /var/run/dpdk/spdk_pid78601 00:36:49.543 Removing: /var/run/dpdk/spdk_pid79409 00:36:49.543 Removing: /var/run/dpdk/spdk_pid80125 00:36:49.543 Removing: /var/run/dpdk/spdk_pid80927 00:36:49.543 Removing: /var/run/dpdk/spdk_pid81091 00:36:49.543 Removing: /var/run/dpdk/spdk_pid81171 00:36:49.543 Removing: /var/run/dpdk/spdk_pid81657 00:36:49.543 Removing: /var/run/dpdk/spdk_pid81711 00:36:49.543 Removing: /var/run/dpdk/spdk_pid82320 00:36:49.543 Removing: /var/run/dpdk/spdk_pid82795 00:36:49.543 Removing: /var/run/dpdk/spdk_pid83582 00:36:49.543 Removing: /var/run/dpdk/spdk_pid83710 00:36:49.543 Removing: /var/run/dpdk/spdk_pid83752 00:36:49.543 Removing: /var/run/dpdk/spdk_pid83810 00:36:49.543 Removing: /var/run/dpdk/spdk_pid83868 00:36:49.543 Removing: /var/run/dpdk/spdk_pid83934 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84146 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84228 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84295 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84352 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84381 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84470 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84637 00:36:49.543 Removing: /var/run/dpdk/spdk_pid84862 00:36:49.543 Removing: /var/run/dpdk/spdk_pid85453 00:36:49.543 Removing: /var/run/dpdk/spdk_pid86226 00:36:49.543 Removing: /var/run/dpdk/spdk_pid86779 00:36:49.543 Removing: /var/run/dpdk/spdk_pid87470 00:36:49.543 Clean 00:36:49.804 00:14:22 -- common/autotest_common.sh@1453 -- # return 0 00:36:49.805 
00:14:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:49.805 00:14:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:49.805 00:14:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.805 00:14:22 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:49.805 00:14:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:49.805 00:14:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.805 00:14:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:49.805 00:14:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:49.805 00:14:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:49.805 00:14:22 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:49.805 00:14:22 -- spdk/autotest.sh@398 -- # hostname 00:36:49.805 00:14:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:50.066 geninfo: WARNING: invalid characters removed from testname! 00:37:16.646 00:14:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:18.030 00:14:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:20.575 00:14:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:23.119 00:14:55 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:26.421 00:14:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:28.971 00:15:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:31.527 00:15:03 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:31.527 00:15:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:31.527 00:15:03 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:31.527 00:15:03 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:31.527 00:15:03 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:31.527 00:15:03 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:31.527 + [[ -n 5022 ]] 00:37:31.527 + sudo kill 5022 00:37:31.538 [Pipeline] } 00:37:31.550 [Pipeline] // timeout 00:37:31.555 [Pipeline] } 00:37:31.569 [Pipeline] // stage 00:37:31.574 [Pipeline] } 00:37:31.589 [Pipeline] // catchError 00:37:31.598 [Pipeline] stage 00:37:31.600 [Pipeline] { (Stop VM) 00:37:31.613 [Pipeline] sh 00:37:31.898 + vagrant halt 00:37:34.442 ==> default: Halting domain... 00:37:38.646 [Pipeline] sh 00:37:38.928 + vagrant destroy -f 00:37:41.472 ==> default: Removing domain... 00:37:42.056 [Pipeline] sh 00:37:42.343 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:37:42.353 [Pipeline] } 00:37:42.370 [Pipeline] // stage 00:37:42.375 [Pipeline] } 00:37:42.390 [Pipeline] // dir 00:37:42.396 [Pipeline] } 00:37:42.412 [Pipeline] // wrap 00:37:42.418 [Pipeline] } 00:37:42.431 [Pipeline] // catchError 00:37:42.441 [Pipeline] stage 00:37:42.444 [Pipeline] { (Epilogue) 00:37:42.457 [Pipeline] sh 00:37:42.862 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:48.150 [Pipeline] catchError 00:37:48.152 [Pipeline] { 00:37:48.165 [Pipeline] sh 00:37:48.450 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:48.710 Artifacts sizes are good 00:37:48.721 [Pipeline] } 00:37:48.739 [Pipeline] // catchError 00:37:48.755 [Pipeline] archiveArtifacts 00:37:48.764 Archiving artifacts 00:37:48.877 [Pipeline] cleanWs 00:37:48.889 [WS-CLEANUP] Deleting project workspace... 00:37:48.889 [WS-CLEANUP] Deferred wipeout is used... 00:37:48.896 [WS-CLEANUP] done 00:37:48.898 [Pipeline] } 00:37:48.917 [Pipeline] // stage 00:37:48.923 [Pipeline] } 00:37:48.937 [Pipeline] // node 00:37:48.944 [Pipeline] End of Pipeline 00:37:48.985 Finished: SUCCESS