00:00:00.000 Started by upstream project "autotest-nightly" build number 4155
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3517
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.075 The recommended git tool is: git
00:00:00.075 using credential 00000000-0000-0000-0000-000000000002
00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.122 Fetching changes from the remote Git repository
00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.160 Using shallow fetch with depth 1
00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.160 > git --version # timeout=10
00:00:00.198 > git --version # 'git version 2.39.2'
00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.094 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.105 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.116 Checking out Revision f95f9907808933a1db7196e15e13478e0f322ee7 (FETCH_HEAD)
00:00:06.116 > git config core.sparsecheckout # timeout=10
00:00:06.129 > git read-tree -mu HEAD # timeout=10
00:00:06.143 > git checkout -f f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=5
00:00:06.161 Commit message: "Revert "autotest-phy: replace deprecated label for nvmf-cvl""
00:00:06.161 > git rev-list --no-walk f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=10
00:00:06.358 [Pipeline] Start of Pipeline
00:00:06.370 [Pipeline] library
00:00:06.371 Loading library shm_lib@master
00:00:06.371 Library shm_lib@master is cached. Copying from home.
00:00:06.385 [Pipeline] node
00:00:06.406 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:06.407 [Pipeline] {
00:00:06.415 [Pipeline] catchError
00:00:06.416 [Pipeline] {
00:00:06.424 [Pipeline] wrap
00:00:06.429 [Pipeline] {
00:00:06.434 [Pipeline] stage
00:00:06.436 [Pipeline] { (Prologue)
00:00:06.447 [Pipeline] echo
00:00:06.448 Node: VM-host-WFP1
00:00:06.452 [Pipeline] cleanWs
00:00:06.461 [WS-CLEANUP] Deleting project workspace...
00:00:06.461 [WS-CLEANUP] Deferred wipeout is used...
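(The sequence above pins the jbp build-pool checkout to one exact commit: a depth-1 fetch of refs/heads/master followed by a detached checkout of FETCH_HEAD. A minimal standalone equivalent, assuming anonymous HTTPS access and an illustrative clone directory -- the CI itself goes through the GIT_ASKPASS credentials and proxy shown above:

    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 keeps only the branch tip, matching the shallow fetch in the log
    git fetch --tags --force --depth=1 origin refs/heads/master
    # FETCH_HEAD resolved to f95f990780... in this run; check it out detached
    git checkout -f f95f9907808933a1db7196e15e13478e0f322ee7
)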
00:00:06.466 [WS-CLEANUP] done
00:00:06.695 [Pipeline] setCustomBuildProperty
00:00:06.790 [Pipeline] httpRequest
00:00:07.219 [Pipeline] echo
00:00:07.220 Sorcerer 10.211.164.101 is alive
00:00:07.228 [Pipeline] retry
00:00:07.229 [Pipeline] {
00:00:07.238 [Pipeline] httpRequest
00:00:07.241 HttpMethod: GET
00:00:07.242 URL: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:07.242 Sending request to url: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:07.244 Response Code: HTTP/1.1 200 OK
00:00:07.244 Success: Status code 200 is in the accepted range: 200,404
00:00:07.244 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:08.403 [Pipeline] }
00:00:08.417 [Pipeline] // retry
00:00:08.425 [Pipeline] sh
00:00:08.710 + tar --no-same-owner -xf jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:08.724 [Pipeline] httpRequest
00:00:09.397 [Pipeline] echo
00:00:09.399 Sorcerer 10.211.164.101 is alive
00:00:09.406 [Pipeline] retry
00:00:09.408 [Pipeline] {
00:00:09.420 [Pipeline] httpRequest
00:00:09.424 HttpMethod: GET
00:00:09.425 URL: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:09.425 Sending request to url: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:09.448 Response Code: HTTP/1.1 200 OK
00:00:09.449 Success: Status code 200 is in the accepted range: 200,404
00:00:09.449 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:03:09.355 [Pipeline] }
00:03:09.372 [Pipeline] // retry
00:03:09.380 [Pipeline] sh
00:03:09.662 + tar --no-same-owner -xf spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:03:12.215 [Pipeline] sh
00:03:12.592 + git -C spdk log --oneline -n5
00:03:12.592 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:03:12.592 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:03:12.592 82c46626a lib/event: implement scheduler trace events
00:03:12.592 fa6aec495 lib/thread: register thread owner type for scheduler trace events
00:03:12.592 1876d41a3 include/spdk_internal: define scheduler tracegroup and tracepoints
00:03:12.615 [Pipeline] writeFile
00:03:12.630 [Pipeline] sh
00:03:12.915 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:12.928 [Pipeline] sh
00:03:13.213 + cat autorun-spdk.conf
00:03:13.213 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:13.213 SPDK_TEST_NVME=1
00:03:13.213 SPDK_TEST_FTL=1
00:03:13.213 SPDK_TEST_ISAL=1
00:03:13.213 SPDK_RUN_ASAN=1
00:03:13.213 SPDK_RUN_UBSAN=1
00:03:13.213 SPDK_TEST_XNVME=1
00:03:13.213 SPDK_TEST_NVME_FDP=1
00:03:13.213 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:13.220 RUN_NIGHTLY=1
00:03:13.222 [Pipeline] }
00:03:13.236 [Pipeline] // stage
00:03:13.250 [Pipeline] stage
00:03:13.252 [Pipeline] { (Run VM)
00:03:13.269 [Pipeline] sh
00:03:13.551 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:13.551 + echo 'Start stage prepare_nvme.sh'
00:03:13.551 Start stage prepare_nvme.sh
00:03:13.551 + [[ -n 2 ]]
00:03:13.551 + disk_prefix=ex2
00:03:13.551 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:03:13.551 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:03:13.551 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:03:13.551 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:13.551 ++ SPDK_TEST_NVME=1
00:03:13.551 ++ SPDK_TEST_FTL=1
00:03:13.551 ++ SPDK_TEST_ISAL=1
00:03:13.551 ++ SPDK_RUN_ASAN=1
00:03:13.551 ++ SPDK_RUN_UBSAN=1
00:03:13.551 ++ SPDK_TEST_XNVME=1
00:03:13.551 ++ SPDK_TEST_NVME_FDP=1
00:03:13.551 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:13.551 ++ RUN_NIGHTLY=1
00:03:13.551 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:03:13.551 + nvme_files=()
00:03:13.551 + declare -A nvme_files
00:03:13.551 + backend_dir=/var/lib/libvirt/images/backends
00:03:13.551 + nvme_files['nvme.img']=5G
00:03:13.551 + nvme_files['nvme-cmb.img']=5G
00:03:13.551 + nvme_files['nvme-multi0.img']=4G
00:03:13.551 + nvme_files['nvme-multi1.img']=4G
00:03:13.551 + nvme_files['nvme-multi2.img']=4G
00:03:13.551 + nvme_files['nvme-openstack.img']=8G
00:03:13.551 + nvme_files['nvme-zns.img']=5G
00:03:13.551 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:13.551 + (( SPDK_TEST_FTL == 1 ))
00:03:13.551 + nvme_files["nvme-ftl.img"]=6G
00:03:13.551 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:13.551 + nvme_files["nvme-fdp.img"]=1G
00:03:13.551 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:13.551 + for nvme in "${!nvme_files[@]}"
00:03:13.551 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:03:13.551 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:13.552 + for nvme in "${!nvme_files[@]}"
00:03:13.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:03:13.811 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:03:13.811 + for nvme in "${!nvme_files[@]}"
00:03:13.811 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:03:13.811 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:13.811 + for nvme in "${!nvme_files[@]}"
00:03:13.811 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:03:13.811 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:13.811 + for nvme in "${!nvme_files[@]}"
00:03:13.811 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:03:13.811 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:13.811 + for nvme in "${!nvme_files[@]}"
00:03:13.811 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:03:14.070 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:14.070 + for nvme in "${!nvme_files[@]}"
00:03:14.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:03:14.328 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:14.328 + for nvme in "${!nvme_files[@]}"
00:03:14.328 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:03:14.328 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:03:14.328 + for nvme in "${!nvme_files[@]}"
00:03:14.329 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:03:14.588 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:14.588 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:03:14.588 + echo 'End stage prepare_nvme.sh'
00:03:14.588 End stage prepare_nvme.sh
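(The Formatting lines above are qemu-img output -- fmt=raw, preallocation=falloc -- so each create_nvme_img.sh call effectively creates a preallocated raw backing file. A hand-run sketch for the plain 5G disk, assuming qemu-img is on PATH; sizes for the other images follow the nvme_files table above:

    # create a raw, falloc-preallocated backing file like the ones in the log
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex2-nvme.img 5G
)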
00:03:14.600 [Pipeline] sh
00:03:14.882 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:14.882 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:03:14.882
00:03:14.882 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:03:14.882 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:03:14.882 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:03:14.882 HELP=0
00:03:14.882 DRY_RUN=0
00:03:14.882 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:03:14.882 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:03:14.882 NVME_AUTO_CREATE=0
00:03:14.882 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:03:14.882 NVME_CMB=,,,,
00:03:14.882 NVME_PMR=,,,,
00:03:14.882 NVME_ZNS=,,,,
00:03:14.882 NVME_MS=true,,,,
00:03:14.883 NVME_FDP=,,,on,
00:03:14.883 SPDK_VAGRANT_DISTRO=fedora39
00:03:14.883 SPDK_VAGRANT_VMCPU=10
00:03:14.883 SPDK_VAGRANT_VMRAM=12288
00:03:14.883 SPDK_VAGRANT_PROVIDER=libvirt
00:03:14.883 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:14.883 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:14.883 SPDK_OPENSTACK_NETWORK=0
00:03:14.883 VAGRANT_PACKAGE_BOX=0
00:03:14.883 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:03:14.883 FORCE_DISTRO=true
00:03:14.883 VAGRANT_BOX_VERSION=
00:03:14.883 EXTRA_VAGRANTFILES=
00:03:14.883 NIC_MODEL=e1000
00:03:14.883
00:03:14.883 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:03:14.883 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:03:17.417 Bringing machine 'default' up with 'libvirt' provider...
00:03:18.355 ==> default: Creating image (snapshot of base box volume).
00:03:18.614 ==> default: Creating domain with the following settings...
00:03:18.614 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728303220_4881a2467752b7cc7f42
00:03:18.614 ==> default: -- Domain type: kvm
00:03:18.614 ==> default: -- Cpus: 10
00:03:18.614 ==> default: -- Feature: acpi
00:03:18.614 ==> default: -- Feature: apic
00:03:18.614 ==> default: -- Feature: pae
00:03:18.614 ==> default: -- Memory: 12288M
00:03:18.614 ==> default: -- Memory Backing: hugepages:
00:03:18.614 ==> default: -- Management MAC:
00:03:18.614 ==> default: -- Loader:
00:03:18.614 ==> default: -- Nvram:
00:03:18.614 ==> default: -- Base box: spdk/fedora39
00:03:18.614 ==> default: -- Storage pool: default
00:03:18.614 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728303220_4881a2467752b7cc7f42.img (20G)
00:03:18.614 ==> default: -- Volume Cache: default
00:03:18.614 ==> default: -- Kernel:
00:03:18.614 ==> default: -- Initrd:
00:03:18.614 ==> default: -- Graphics Type: vnc
00:03:18.614 ==> default: -- Graphics Port: -1
00:03:18.614 ==> default: -- Graphics IP: 127.0.0.1
00:03:18.614 ==> default: -- Graphics Password: Not defined
00:03:18.614 ==> default: -- Video Type: cirrus
00:03:18.614 ==> default: -- Video VRAM: 9216
00:03:18.614 ==> default: -- Sound Type:
00:03:18.614 ==> default: -- Keymap: en-us
00:03:18.614 ==> default: -- TPM Path:
00:03:18.614 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:18.614 ==> default: -- Command line args:
00:03:18.614 ==> default: -> value=-device,
00:03:18.614 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:18.614 ==> default: -> value=-drive,
00:03:18.614 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:03:18.614 ==> default: -> value=-device,
00:03:18.614 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:03:18.614 ==> default: -> value=-device,
00:03:18.614 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:18.614 ==> default: -> value=-drive,
00:03:18.614 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:03:18.614 ==> default: -> value=-device,
00:03:18.614 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:18.614 ==> default: -> value=-device,
00:03:18.614 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:03:18.614 ==> default: -> value=-drive,
00:03:18.614 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:03:18.614 ==> default: -> value=-device,
00:03:18.615 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:18.615 ==> default: -> value=-drive,
00:03:18.615 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:03:18.615 ==> default: -> value=-device,
00:03:18.615 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:18.615 ==> default: -> value=-drive,
00:03:18.615 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:03:18.615 ==> default: -> value=-device,
00:03:18.615 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:18.615 ==> default: -> value=-device,
00:03:18.615 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:03:18.615 ==> default: -> value=-device,
00:03:18.615 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:03:18.615 ==> default: -> value=-drive,
00:03:18.615 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:03:18.615 ==> default: -> value=-device,
00:03:18.615 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
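(Note the last controller: QEMU models NVMe Flexible Data Placement (FDP) at the subsystem level, so nvme-3 is attached to an nvme-subsys device with fdp=on plus the reclaim-unit size (fdp.runs) and reclaim-group/unit-handle counts (fdp.nrg, fdp.nruh). Extracted from the full invocation above, the FDP-relevant arguments alone look like this -- a trimmed-down sketch, all values taken from the log, other devices elided:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
)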
00:03:19.183 ==> default: Creating shared folders metadata...
00:03:19.183 ==> default: Starting domain.
00:03:21.091 ==> default: Waiting for domain to get an IP address...
00:03:36.008 ==> default: Waiting for SSH to become available...
00:03:37.386 ==> default: Configuring and enabling network interfaces...
00:03:42.653 default: SSH address: 192.168.121.154:22
00:03:42.653 default: SSH username: vagrant
00:03:42.653 default: SSH auth method: private key
00:03:45.942 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:54.061 ==> default: Mounting SSHFS shared folder...
00:03:55.966 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:55.966 ==> default: Checking Mount..
00:03:57.871 ==> default: Folder Successfully Mounted!
00:03:57.871 ==> default: Running provisioner: file...
00:03:58.809 default: ~/.gitconfig => .gitconfig
00:03:59.069
00:03:59.069 SUCCESS!
00:03:59.069
00:03:59.069 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:03:59.069 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:59.069 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:03:59.069
00:03:59.079 [Pipeline] }
00:03:59.097 [Pipeline] // stage
00:03:59.107 [Pipeline] dir
00:03:59.107 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:03:59.109 [Pipeline] {
00:03:59.122 [Pipeline] catchError
00:03:59.124 [Pipeline] {
00:03:59.138 [Pipeline] sh
00:03:59.421 + vagrant ssh-config --host vagrant
00:03:59.421 + sed -ne /^Host/,$p
00:03:59.421 + tee ssh_conf
00:04:02.727 Host vagrant
00:04:02.727 HostName 192.168.121.154
00:04:02.727 User vagrant
00:04:02.727 Port 22
00:04:02.727 UserKnownHostsFile /dev/null
00:04:02.727 StrictHostKeyChecking no
00:04:02.727 PasswordAuthentication no
00:04:02.727 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:02.727 IdentitiesOnly yes
00:04:02.727 LogLevel FATAL
00:04:02.727 ForwardAgent yes
00:04:02.727 ForwardX11 yes
00:04:02.727
00:04:02.746 [Pipeline] withEnv
00:04:02.748 [Pipeline] {
00:04:02.760 [Pipeline] sh
00:04:03.040 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:03.040 source /etc/os-release
00:04:03.040 [[ -e /image.version ]] && img=$(< /image.version)
00:04:03.040 # Minimal, systemd-like check.
00:04:03.040 if [[ -e /.dockerenv ]]; then
00:04:03.040 # Clear garbage from the node's name:
00:04:03.040 # agt-er_autotest_547-896 -> autotest_547-896
00:04:03.040 # $HOSTNAME is the actual container id
00:04:03.040 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:03.040 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:03.040 # We can assume this is a mount from a host where container is running,
00:04:03.040 # so fetch its hostname to easily identify the target swarm worker.
00:04:03.040 container="$(< /etc/hostname) ($agent)"
00:04:03.040 else
00:04:03.040 # Fallback
00:04:03.040 container=$agent
00:04:03.040 fi
00:04:03.040 fi
00:04:03.040 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:03.040
00:04:03.311 [Pipeline] }
00:04:03.327 [Pipeline] // withEnv
00:04:03.337 [Pipeline] setCustomBuildProperty
00:04:03.351 [Pipeline] stage
00:04:03.354 [Pipeline] { (Tests)
00:04:03.388 [Pipeline] sh
00:04:03.669 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:03.941 [Pipeline] sh
00:04:04.259 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:04.533 [Pipeline] timeout
00:04:04.533 Timeout set to expire in 50 min
00:04:04.535 [Pipeline] {
00:04:04.549 [Pipeline] sh
00:04:04.830 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:05.397 HEAD is now at 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:04:05.409 [Pipeline] sh
00:04:05.691 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:05.962 [Pipeline] sh
00:04:06.241 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:06.516 [Pipeline] sh
00:04:06.798 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:04:07.058 ++ readlink -f spdk_repo
00:04:07.058 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:07.058 + [[ -n /home/vagrant/spdk_repo ]]
00:04:07.058 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:07.058 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:07.058 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:07.058 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:07.058 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:07.058 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:07.058 + cd /home/vagrant/spdk_repo
00:04:07.058 + source /etc/os-release
00:04:07.058 ++ NAME='Fedora Linux'
00:04:07.058 ++ VERSION='39 (Cloud Edition)'
00:04:07.058 ++ ID=fedora
00:04:07.058 ++ VERSION_ID=39
00:04:07.058 ++ VERSION_CODENAME=
00:04:07.058 ++ PLATFORM_ID=platform:f39
00:04:07.058 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:07.058 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:07.058 ++ LOGO=fedora-logo-icon
00:04:07.058 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:07.058 ++ HOME_URL=https://fedoraproject.org/
00:04:07.058 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:07.058 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:07.058 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:07.058 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:07.058 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:07.058 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:07.058 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:07.058 ++ SUPPORT_END=2024-11-12
00:04:07.058 ++ VARIANT='Cloud Edition'
00:04:07.058 ++ VARIANT_ID=cloud
00:04:07.058 + uname -a
00:04:07.058 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:07.058 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:07.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:07.887 Hugepages
00:04:07.887 node hugesize free / total
00:04:07.887 node0 1048576kB 0 / 0
00:04:07.887 node0 2048kB 0 / 0
00:04:07.888
00:04:07.888 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:07.888 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:07.888 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:07.888 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:07.888 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:07.888 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
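(The setup.sh status table confirms the guest-side view of the topology: four controllers with QEMU's NVMe vendor/device ID 1b36 0010 bound to the kernel nvme driver, and the namespace counts line up with the -device nvme-ns arguments from the domain definition -- one namespace each on nvme0, nvme1 and nvme3, three on nvme2. A quick manual cross-check from inside the VM, assuming pciutils and nvme-cli are installed in the guest:

    lspci -d 1b36:0010   # list the four emulated NVMe controllers
    sudo nvme list       # one row per namespace: nvme0n1 ... nvme2n3 ... nvme3n1
)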
00:04:07.888 + rm -f /tmp/spdk-ld-path
00:04:07.888 + source autorun-spdk.conf
00:04:07.888 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:07.888 ++ SPDK_TEST_NVME=1
00:04:07.888 ++ SPDK_TEST_FTL=1
00:04:07.888 ++ SPDK_TEST_ISAL=1
00:04:07.888 ++ SPDK_RUN_ASAN=1
00:04:07.888 ++ SPDK_RUN_UBSAN=1
00:04:07.888 ++ SPDK_TEST_XNVME=1
00:04:07.888 ++ SPDK_TEST_NVME_FDP=1
00:04:07.888 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:07.888 ++ RUN_NIGHTLY=1
00:04:07.888 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:07.888 + [[ -n '' ]]
00:04:07.888 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:08.147 + for M in /var/spdk/build-*-manifest.txt
00:04:08.147 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:08.147 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:08.147 + for M in /var/spdk/build-*-manifest.txt
00:04:08.147 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:08.147 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:08.147 + for M in /var/spdk/build-*-manifest.txt
00:04:08.147 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:08.147 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:08.147 ++ uname
00:04:08.147 + [[ Linux == \L\i\n\u\x ]]
00:04:08.147 + sudo dmesg -T
00:04:08.147 + sudo dmesg --clear
00:04:08.147 + dmesg_pid=5241
00:04:08.147 + [[ Fedora Linux == FreeBSD ]]
00:04:08.147 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:08.147 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:08.147 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:08.147 + sudo dmesg -Tw
00:04:08.147 + [[ -x /usr/src/fio-static/fio ]]
00:04:08.148 + export FIO_BIN=/usr/src/fio-static/fio
00:04:08.148 + FIO_BIN=/usr/src/fio-static/fio
00:04:08.148 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:08.148 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:08.148 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:08.148 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:08.148 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:08.148 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:08.148 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:08.148 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:08.148 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:08.148 Test configuration:
00:04:08.148 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:08.148 SPDK_TEST_NVME=1
00:04:08.148 SPDK_TEST_FTL=1
00:04:08.148 SPDK_TEST_ISAL=1
00:04:08.148 SPDK_RUN_ASAN=1
00:04:08.148 SPDK_RUN_UBSAN=1
00:04:08.148 SPDK_TEST_XNVME=1
00:04:08.148 SPDK_TEST_NVME_FDP=1
00:04:08.148 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:08.408 RUN_NIGHTLY=1
12:14:31 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:04:08.408 12:14:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
12:14:31 -- scripts/common.sh@15 -- $ shopt -s extglob
12:14:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
12:14:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:14:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:14:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:14:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:14:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:14:31 -- paths/export.sh@5 -- $ export PATH
00:04:08.408 12:14:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:14:31 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
12:14:31 -- common/autobuild_common.sh@486 -- $ date +%s
12:14:31 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728303271.XXXXXX
12:14:31 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728303271.6WhORA
12:14:31 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
12:14:31 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
12:14:31 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
12:14:31 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
12:14:31 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
12:14:31 -- common/autobuild_common.sh@502 -- $ get_config_params
12:14:31 -- common/autotest_common.sh@407 -- $ xtrace_disable
12:14:31 -- common/autotest_common.sh@10 -- $ set +x
12:14:31 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
12:14:31 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
12:14:31 -- pm/common@17 -- $ local monitor
12:14:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:14:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:14:31 -- pm/common@25 -- $ sleep 1
12:14:31 -- pm/common@21 -- $ date +%s
12:14:31 -- pm/common@21 -- $ date +%s
12:14:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728303271
12:14:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728303271
00:04:08.408 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728303271_collect-cpu-load.pm.log
00:04:08.408 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728303271_collect-vmstat.pm.log
00:04:09.347 12:14:32 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
12:14:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:14:32 -- spdk/autobuild.sh@12 -- $ umask 022
12:14:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
12:14:32 -- spdk/autobuild.sh@16 -- $ date -u
00:04:09.347 Mon Oct 7 12:14:32 PM UTC 2024
12:14:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:09.347 v25.01-pre-35-g3950cd1bb
00:04:09.347 12:14:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
12:14:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
12:14:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
12:14:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
12:14:32 -- common/autotest_common.sh@10 -- $ set +x
00:04:09.347 ************************************
00:04:09.347 START TEST asan
00:04:09.347 ************************************
00:04:09.347 using asan
12:14:32 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:04:09.347
00:04:09.347 real 0m0.000s
00:04:09.347 user 0m0.000s
00:04:09.347 sys 0m0.000s
12:14:32 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
12:14:32 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:09.347 ************************************
00:04:09.347 END TEST asan
00:04:09.347 ************************************
12:14:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
12:14:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:14:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
12:14:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
12:14:32 -- common/autotest_common.sh@10 -- $ set +x
00:04:09.347 ************************************
00:04:09.347 START TEST ubsan
00:04:09.347 ************************************
00:04:09.347 using ubsan
12:14:32 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:04:09.347
00:04:09.347 real 0m0.000s
00:04:09.347 user 0m0.000s
00:04:09.347 sys 0m0.000s
12:14:32 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
12:14:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:09.347 ************************************
00:04:09.347 END TEST ubsan
00:04:09.347 ************************************
00:04:09.607 12:14:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
12:14:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
12:14:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
12:14:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
12:14:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
12:14:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
12:14:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
12:14:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
12:14:32 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:04:09.607 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:09.607 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:10.176 Using 'verbs' RDMA provider
00:04:26.002 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:44.099 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:44.099 Creating mk/config.mk...done.
00:04:44.099 Creating mk/cc.flags.mk...done.
00:04:44.099 Type 'make' to build.
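(The config_params recorded above are the flags autobuild passed to ./configure, plus --with-shared on the invocation itself, so this configuration can be reproduced by hand outside the CI wrapper -- a sketch, assuming the VM's directory layout:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10
)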
00:04:44.099 12:15:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:44.099 12:15:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:44.099 12:15:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:44.099 12:15:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:44.099 ************************************ 00:04:44.099 START TEST make 00:04:44.099 ************************************ 00:04:44.099 12:15:05 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:44.099 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:44.099 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:44.099 meson setup builddir \ 00:04:44.099 -Dwith-libaio=enabled \ 00:04:44.099 -Dwith-liburing=enabled \ 00:04:44.099 -Dwith-libvfn=disabled \ 00:04:44.099 -Dwith-spdk=false && \ 00:04:44.099 meson compile -C builddir && \ 00:04:44.099 cd -) 00:04:44.099 make[1]: Nothing to be done for 'all'. 00:04:44.668 The Meson build system 00:04:44.668 Version: 1.5.0 00:04:44.668 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:44.668 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:44.668 Build type: native build 00:04:44.668 Project name: xnvme 00:04:44.668 Project version: 0.7.3 00:04:44.668 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:44.668 C linker for the host machine: cc ld.bfd 2.40-14 00:04:44.668 Host machine cpu family: x86_64 00:04:44.668 Host machine cpu: x86_64 00:04:44.668 Message: host_machine.system: linux 00:04:44.668 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:44.668 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:44.668 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:44.668 Run-time dependency threads found: YES 00:04:44.668 Has header "setupapi.h" : NO 00:04:44.668 Has header "linux/blkzoned.h" : YES 00:04:44.668 Has header "linux/blkzoned.h" : YES (cached) 00:04:44.668 Has header "libaio.h" : YES 00:04:44.668 Library aio found: YES 00:04:44.668 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:44.668 Run-time dependency liburing found: YES 2.2 00:04:44.668 Dependency libvfn skipped: feature with-libvfn disabled 00:04:44.668 Run-time dependency appleframeworks found: NO (tried framework) 00:04:44.668 Run-time dependency appleframeworks found: NO (tried framework) 00:04:44.668 Configuring xnvme_config.h using configuration 00:04:44.668 Configuring xnvme.spec using configuration 00:04:44.668 Run-time dependency bash-completion found: YES 2.11 00:04:44.668 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:44.668 Program cp found: YES (/usr/bin/cp) 00:04:44.668 Has header "winsock2.h" : NO 00:04:44.668 Has header "dbghelp.h" : NO 00:04:44.668 Library rpcrt4 found: NO 00:04:44.668 Library rt found: YES 00:04:44.668 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:44.668 Found CMake: /usr/bin/cmake (3.27.7) 00:04:44.668 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:44.668 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:44.668 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:44.668 Build targets in project: 32 00:04:44.668 00:04:44.668 xnvme 0.7.3 00:04:44.668 00:04:44.668 User defined options 00:04:44.668 with-libaio : enabled 00:04:44.668 with-liburing: enabled 00:04:44.668 with-libvfn : disabled 00:04:44.668 with-spdk : false 00:04:44.668 00:04:44.668 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:45.244 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:45.244 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:45.244 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:45.244 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:45.244 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:45.244 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:45.244 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:45.244 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:45.245 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:45.245 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:45.245 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:45.245 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:45.245 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:45.245 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:45.245 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:45.506 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:45.506 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:45.506 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:45.506 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:45.506 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:45.506 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:45.506 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:45.506 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:45.506 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:45.506 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:45.506 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:45.506 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:45.506 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:45.506 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:45.506 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:45.506 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:45.506 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:45.506 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:45.506 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:45.506 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:45.506 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:45.506 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:45.506 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:45.506 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:45.506 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:45.506 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:45.506 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:45.506 [42/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:45.506 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:45.506 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:45.506 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:45.506 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:45.506 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:45.765 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:45.765 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:45.765 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:45.765 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:45.765 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:45.765 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:45.765 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:45.765 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:45.765 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:45.765 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:45.765 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:45.765 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:45.765 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:45.765 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:45.765 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:45.765 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:45.765 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:45.765 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:45.765 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:45.765 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:45.765 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:45.766 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:46.025 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:46.025 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:46.025 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:46.025 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:46.025 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:46.025 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:46.025 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:46.025 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:46.025 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:46.025 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:46.025 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:46.025 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:46.025 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:46.025 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:46.284 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:46.284 [85/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:46.284 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:46.284 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:46.284 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:46.284 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:46.284 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:46.284 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:46.284 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:46.284 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:46.284 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:46.284 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:46.284 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:46.284 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:46.284 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:46.284 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:46.284 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:46.284 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:46.284 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:46.284 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:46.284 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:46.284 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:46.284 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:46.284 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:46.284 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:46.284 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:46.284 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:46.284 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:46.284 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:46.284 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:46.284 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:46.284 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:46.284 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:46.284 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:46.284 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:46.284 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:46.284 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:46.543 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:46.543 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:46.543 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:46.543 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:46.543 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:46.543 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:46.543 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:46.543 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:46.543 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 
00:04:46.543 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:46.543 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:46.543 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:46.543 [133/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:46.543 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:46.543 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:46.543 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:46.543 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:46.543 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:46.543 [139/203] Linking target lib/libxnvme.so 00:04:46.543 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:46.802 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:46.803 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:46.803 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:46.803 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:46.803 [145/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:46.803 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:46.803 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:46.803 [148/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:46.803 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:46.803 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:46.803 [151/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:46.803 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:46.803 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:46.803 [154/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:46.803 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:47.061 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:47.061 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:47.061 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:47.061 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:47.061 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:47.061 [161/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:47.061 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:47.061 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:47.061 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:47.061 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:47.061 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:47.061 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:47.061 [168/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:47.320 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:47.320 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:47.320 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:47.320 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:47.320 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:47.320 [174/203] Linking static target lib/libxnvme.a 00:04:47.320 [175/203] Linking target tests/xnvme_tests_cli 00:04:47.320 [176/203] Linking 
target tests/xnvme_tests_buf 00:04:47.320 [177/203] Linking target tests/xnvme_tests_lblk 00:04:47.320 [178/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:47.320 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:04:47.320 [180/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:47.320 [181/203] Linking target tests/xnvme_tests_scc 00:04:47.320 [182/203] Linking target tests/xnvme_tests_znd_append 00:04:47.320 [183/203] Linking target tests/xnvme_tests_ioworker 00:04:47.320 [184/203] Linking target tests/xnvme_tests_async_intf 00:04:47.320 [185/203] Linking target tests/xnvme_tests_znd_state 00:04:47.320 [186/203] Linking target tests/xnvme_tests_enum 00:04:47.320 [187/203] Linking target tests/xnvme_tests_kvs 00:04:47.320 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:47.320 [189/203] Linking target tests/xnvme_tests_map 00:04:47.320 [190/203] Linking target tools/lblk 00:04:47.320 [191/203] Linking target tools/xdd 00:04:47.320 [192/203] Linking target tools/xnvme 00:04:47.320 [193/203] Linking target tools/xnvme_file 00:04:47.320 [194/203] Linking target examples/xnvme_dev 00:04:47.320 [195/203] Linking target tools/zoned 00:04:47.320 [196/203] Linking target examples/xnvme_single_sync 00:04:47.320 [197/203] Linking target examples/xnvme_single_async 00:04:47.320 [198/203] Linking target examples/xnvme_hello 00:04:47.320 [199/203] Linking target tools/kvs 00:04:47.320 [200/203] Linking target examples/xnvme_enum 00:04:47.320 [201/203] Linking target examples/zoned_io_async 00:04:47.320 [202/203] Linking target examples/xnvme_io_async 00:04:47.320 [203/203] Linking target examples/zoned_io_sync 00:04:47.623 INFO: autodetecting backend as ninja 00:04:47.623 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:47.623 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:54.190 The Meson build system 00:04:54.190 Version: 1.5.0 00:04:54.190 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:54.190 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:54.190 Build type: native build 00:04:54.190 Program cat found: YES (/usr/bin/cat) 00:04:54.190 Project name: DPDK 00:04:54.190 Project version: 24.03.0 00:04:54.190 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:54.190 C linker for the host machine: cc ld.bfd 2.40-14 00:04:54.190 Host machine cpu family: x86_64 00:04:54.190 Host machine cpu: x86_64 00:04:54.190 Message: ## Building in Developer Mode ## 00:04:54.190 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:54.190 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:54.190 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:54.190 Program python3 found: YES (/usr/bin/python3) 00:04:54.190 Program cat found: YES (/usr/bin/cat) 00:04:54.190 Compiler for C supports arguments -march=native: YES 00:04:54.190 Checking for size of "void *" : 8 00:04:54.190 Checking for size of "void *" : 8 (cached) 00:04:54.190 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:54.190 Library m found: YES 00:04:54.190 Library numa found: YES 00:04:54.190 Has header "numaif.h" : YES 00:04:54.190 Library fdt found: NO 00:04:54.190 Library execinfo found: NO 00:04:54.190 Has header "execinfo.h" : YES 00:04:54.190 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:54.190 Run-time dependency libarchive found: 
NO (tried pkgconfig) 00:04:54.190 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:54.190 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:54.190 Run-time dependency openssl found: YES 3.1.1 00:04:54.190 Run-time dependency libpcap found: YES 1.10.4 00:04:54.190 Has header "pcap.h" with dependency libpcap: YES 00:04:54.190 Compiler for C supports arguments -Wcast-qual: YES 00:04:54.190 Compiler for C supports arguments -Wdeprecated: YES 00:04:54.190 Compiler for C supports arguments -Wformat: YES 00:04:54.190 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:54.190 Compiler for C supports arguments -Wformat-security: NO 00:04:54.190 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:54.190 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:54.190 Compiler for C supports arguments -Wnested-externs: YES 00:04:54.190 Compiler for C supports arguments -Wold-style-definition: YES 00:04:54.190 Compiler for C supports arguments -Wpointer-arith: YES 00:04:54.190 Compiler for C supports arguments -Wsign-compare: YES 00:04:54.191 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:54.191 Compiler for C supports arguments -Wundef: YES 00:04:54.191 Compiler for C supports arguments -Wwrite-strings: YES 00:04:54.191 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:54.191 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:54.191 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:54.191 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:54.191 Program objdump found: YES (/usr/bin/objdump) 00:04:54.191 Compiler for C supports arguments -mavx512f: YES 00:04:54.191 Checking if "AVX512 checking" compiles: YES 00:04:54.191 Fetching value of define "__SSE4_2__" : 1 00:04:54.191 Fetching value of define "__AES__" : 1 00:04:54.191 Fetching value of define "__AVX__" : 1 00:04:54.191 Fetching value of define "__AVX2__" : 1 00:04:54.191 Fetching value of define "__AVX512BW__" : 1 00:04:54.191 Fetching value of define "__AVX512CD__" : 1 00:04:54.191 Fetching value of define "__AVX512DQ__" : 1 00:04:54.191 Fetching value of define "__AVX512F__" : 1 00:04:54.191 Fetching value of define "__AVX512VL__" : 1 00:04:54.191 Fetching value of define "__PCLMUL__" : 1 00:04:54.191 Fetching value of define "__RDRND__" : 1 00:04:54.191 Fetching value of define "__RDSEED__" : 1 00:04:54.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:54.191 Fetching value of define "__znver1__" : (undefined) 00:04:54.191 Fetching value of define "__znver2__" : (undefined) 00:04:54.191 Fetching value of define "__znver3__" : (undefined) 00:04:54.191 Fetching value of define "__znver4__" : (undefined) 00:04:54.191 Library asan found: YES 00:04:54.191 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:54.191 Message: lib/log: Defining dependency "log" 00:04:54.191 Message: lib/kvargs: Defining dependency "kvargs" 00:04:54.191 Message: lib/telemetry: Defining dependency "telemetry" 00:04:54.191 Library rt found: YES 00:04:54.191 Checking for function "getentropy" : NO 00:04:54.191 Message: lib/eal: Defining dependency "eal" 00:04:54.191 Message: lib/ring: Defining dependency "ring" 00:04:54.191 Message: lib/rcu: Defining dependency "rcu" 00:04:54.191 Message: lib/mempool: Defining dependency "mempool" 00:04:54.191 Message: lib/mbuf: Defining dependency "mbuf" 00:04:54.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:54.191 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:04:54.191 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:54.191 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:54.191 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:54.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:54.191 Compiler for C supports arguments -mpclmul: YES 00:04:54.191 Compiler for C supports arguments -maes: YES 00:04:54.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:54.191 Compiler for C supports arguments -mavx512bw: YES 00:04:54.191 Compiler for C supports arguments -mavx512dq: YES 00:04:54.191 Compiler for C supports arguments -mavx512vl: YES 00:04:54.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:54.191 Compiler for C supports arguments -mavx2: YES 00:04:54.191 Compiler for C supports arguments -mavx: YES 00:04:54.191 Message: lib/net: Defining dependency "net" 00:04:54.191 Message: lib/meter: Defining dependency "meter" 00:04:54.191 Message: lib/ethdev: Defining dependency "ethdev" 00:04:54.191 Message: lib/pci: Defining dependency "pci" 00:04:54.191 Message: lib/cmdline: Defining dependency "cmdline" 00:04:54.191 Message: lib/hash: Defining dependency "hash" 00:04:54.191 Message: lib/timer: Defining dependency "timer" 00:04:54.191 Message: lib/compressdev: Defining dependency "compressdev" 00:04:54.191 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:54.191 Message: lib/dmadev: Defining dependency "dmadev" 00:04:54.191 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:54.191 Message: lib/power: Defining dependency "power" 00:04:54.191 Message: lib/reorder: Defining dependency "reorder" 00:04:54.191 Message: lib/security: Defining dependency "security" 00:04:54.191 Has header "linux/userfaultfd.h" : YES 00:04:54.191 Has header "linux/vduse.h" : YES 00:04:54.191 Message: lib/vhost: Defining dependency "vhost" 00:04:54.191 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:54.191 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:54.191 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:54.191 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:54.191 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:54.191 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:54.191 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:54.191 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:54.191 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:54.191 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:54.191 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:54.191 Configuring doxy-api-html.conf using configuration 00:04:54.191 Configuring doxy-api-man.conf using configuration 00:04:54.191 Program mandb found: YES (/usr/bin/mandb) 00:04:54.191 Program sphinx-build found: NO 00:04:54.191 Configuring rte_build_config.h using configuration 00:04:54.191 Message: 00:04:54.191 ================= 00:04:54.191 Applications Enabled 00:04:54.191 ================= 00:04:54.191 00:04:54.191 apps: 00:04:54.191 00:04:54.191 00:04:54.191 Message: 00:04:54.191 ================= 00:04:54.191 Libraries Enabled 00:04:54.191 ================= 00:04:54.191 00:04:54.191 libs: 00:04:54.191 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:54.191 net, meter, ethdev, pci, cmdline, hash, timer, 
compressdev, 00:04:54.191 cryptodev, dmadev, power, reorder, security, vhost, 00:04:54.191 00:04:54.191 Message: 00:04:54.191 =============== 00:04:54.191 Drivers Enabled 00:04:54.191 =============== 00:04:54.191 00:04:54.191 common: 00:04:54.191 00:04:54.191 bus: 00:04:54.191 pci, vdev, 00:04:54.191 mempool: 00:04:54.191 ring, 00:04:54.191 dma: 00:04:54.191 00:04:54.191 net: 00:04:54.191 00:04:54.191 crypto: 00:04:54.191 00:04:54.191 compress: 00:04:54.191 00:04:54.191 vdpa: 00:04:54.191 00:04:54.191 00:04:54.191 Message: 00:04:54.191 ================= 00:04:54.191 Content Skipped 00:04:54.191 ================= 00:04:54.191 00:04:54.191 apps: 00:04:54.191 dumpcap: explicitly disabled via build config 00:04:54.191 graph: explicitly disabled via build config 00:04:54.191 pdump: explicitly disabled via build config 00:04:54.191 proc-info: explicitly disabled via build config 00:04:54.191 test-acl: explicitly disabled via build config 00:04:54.191 test-bbdev: explicitly disabled via build config 00:04:54.191 test-cmdline: explicitly disabled via build config 00:04:54.191 test-compress-perf: explicitly disabled via build config 00:04:54.191 test-crypto-perf: explicitly disabled via build config 00:04:54.191 test-dma-perf: explicitly disabled via build config 00:04:54.191 test-eventdev: explicitly disabled via build config 00:04:54.191 test-fib: explicitly disabled via build config 00:04:54.191 test-flow-perf: explicitly disabled via build config 00:04:54.191 test-gpudev: explicitly disabled via build config 00:04:54.191 test-mldev: explicitly disabled via build config 00:04:54.191 test-pipeline: explicitly disabled via build config 00:04:54.191 test-pmd: explicitly disabled via build config 00:04:54.191 test-regex: explicitly disabled via build config 00:04:54.191 test-sad: explicitly disabled via build config 00:04:54.191 test-security-perf: explicitly disabled via build config 00:04:54.191 00:04:54.191 libs: 00:04:54.191 argparse: explicitly disabled via build config 00:04:54.191 metrics: explicitly disabled via build config 00:04:54.191 acl: explicitly disabled via build config 00:04:54.191 bbdev: explicitly disabled via build config 00:04:54.191 bitratestats: explicitly disabled via build config 00:04:54.191 bpf: explicitly disabled via build config 00:04:54.191 cfgfile: explicitly disabled via build config 00:04:54.191 distributor: explicitly disabled via build config 00:04:54.191 efd: explicitly disabled via build config 00:04:54.191 eventdev: explicitly disabled via build config 00:04:54.191 dispatcher: explicitly disabled via build config 00:04:54.191 gpudev: explicitly disabled via build config 00:04:54.191 gro: explicitly disabled via build config 00:04:54.191 gso: explicitly disabled via build config 00:04:54.191 ip_frag: explicitly disabled via build config 00:04:54.191 jobstats: explicitly disabled via build config 00:04:54.191 latencystats: explicitly disabled via build config 00:04:54.191 lpm: explicitly disabled via build config 00:04:54.191 member: explicitly disabled via build config 00:04:54.191 pcapng: explicitly disabled via build config 00:04:54.191 rawdev: explicitly disabled via build config 00:04:54.191 regexdev: explicitly disabled via build config 00:04:54.191 mldev: explicitly disabled via build config 00:04:54.191 rib: explicitly disabled via build config 00:04:54.191 sched: explicitly disabled via build config 00:04:54.191 stack: explicitly disabled via build config 00:04:54.191 ipsec: explicitly disabled via build config 00:04:54.191 pdcp: explicitly disabled via 
build config 00:04:54.191 fib: explicitly disabled via build config 00:04:54.191 port: explicitly disabled via build config 00:04:54.191 pdump: explicitly disabled via build config 00:04:54.191 table: explicitly disabled via build config 00:04:54.191 pipeline: explicitly disabled via build config 00:04:54.191 graph: explicitly disabled via build config 00:04:54.191 node: explicitly disabled via build config 00:04:54.191 00:04:54.191 drivers: 00:04:54.191 common/cpt: not in enabled drivers build config 00:04:54.191 common/dpaax: not in enabled drivers build config 00:04:54.191 common/iavf: not in enabled drivers build config 00:04:54.191 common/idpf: not in enabled drivers build config 00:04:54.191 common/ionic: not in enabled drivers build config 00:04:54.191 common/mvep: not in enabled drivers build config 00:04:54.191 common/octeontx: not in enabled drivers build config 00:04:54.191 bus/auxiliary: not in enabled drivers build config 00:04:54.191 bus/cdx: not in enabled drivers build config 00:04:54.191 bus/dpaa: not in enabled drivers build config 00:04:54.191 bus/fslmc: not in enabled drivers build config 00:04:54.191 bus/ifpga: not in enabled drivers build config 00:04:54.191 bus/platform: not in enabled drivers build config 00:04:54.191 bus/uacce: not in enabled drivers build config 00:04:54.192 bus/vmbus: not in enabled drivers build config 00:04:54.192 common/cnxk: not in enabled drivers build config 00:04:54.192 common/mlx5: not in enabled drivers build config 00:04:54.192 common/nfp: not in enabled drivers build config 00:04:54.192 common/nitrox: not in enabled drivers build config 00:04:54.192 common/qat: not in enabled drivers build config 00:04:54.192 common/sfc_efx: not in enabled drivers build config 00:04:54.192 mempool/bucket: not in enabled drivers build config 00:04:54.192 mempool/cnxk: not in enabled drivers build config 00:04:54.192 mempool/dpaa: not in enabled drivers build config 00:04:54.192 mempool/dpaa2: not in enabled drivers build config 00:04:54.192 mempool/octeontx: not in enabled drivers build config 00:04:54.192 mempool/stack: not in enabled drivers build config 00:04:54.192 dma/cnxk: not in enabled drivers build config 00:04:54.192 dma/dpaa: not in enabled drivers build config 00:04:54.192 dma/dpaa2: not in enabled drivers build config 00:04:54.192 dma/hisilicon: not in enabled drivers build config 00:04:54.192 dma/idxd: not in enabled drivers build config 00:04:54.192 dma/ioat: not in enabled drivers build config 00:04:54.192 dma/skeleton: not in enabled drivers build config 00:04:54.192 net/af_packet: not in enabled drivers build config 00:04:54.192 net/af_xdp: not in enabled drivers build config 00:04:54.192 net/ark: not in enabled drivers build config 00:04:54.192 net/atlantic: not in enabled drivers build config 00:04:54.192 net/avp: not in enabled drivers build config 00:04:54.192 net/axgbe: not in enabled drivers build config 00:04:54.192 net/bnx2x: not in enabled drivers build config 00:04:54.192 net/bnxt: not in enabled drivers build config 00:04:54.192 net/bonding: not in enabled drivers build config 00:04:54.192 net/cnxk: not in enabled drivers build config 00:04:54.192 net/cpfl: not in enabled drivers build config 00:04:54.192 net/cxgbe: not in enabled drivers build config 00:04:54.192 net/dpaa: not in enabled drivers build config 00:04:54.192 net/dpaa2: not in enabled drivers build config 00:04:54.192 net/e1000: not in enabled drivers build config 00:04:54.192 net/ena: not in enabled drivers build config 00:04:54.192 net/enetc: not in enabled 
drivers build config 00:04:54.192 net/enetfec: not in enabled drivers build config 00:04:54.192 net/enic: not in enabled drivers build config 00:04:54.192 net/failsafe: not in enabled drivers build config 00:04:54.192 net/fm10k: not in enabled drivers build config 00:04:54.192 net/gve: not in enabled drivers build config 00:04:54.192 net/hinic: not in enabled drivers build config 00:04:54.192 net/hns3: not in enabled drivers build config 00:04:54.192 net/i40e: not in enabled drivers build config 00:04:54.192 net/iavf: not in enabled drivers build config 00:04:54.192 net/ice: not in enabled drivers build config 00:04:54.192 net/idpf: not in enabled drivers build config 00:04:54.192 net/igc: not in enabled drivers build config 00:04:54.192 net/ionic: not in enabled drivers build config 00:04:54.192 net/ipn3ke: not in enabled drivers build config 00:04:54.192 net/ixgbe: not in enabled drivers build config 00:04:54.192 net/mana: not in enabled drivers build config 00:04:54.192 net/memif: not in enabled drivers build config 00:04:54.192 net/mlx4: not in enabled drivers build config 00:04:54.192 net/mlx5: not in enabled drivers build config 00:04:54.192 net/mvneta: not in enabled drivers build config 00:04:54.192 net/mvpp2: not in enabled drivers build config 00:04:54.192 net/netvsc: not in enabled drivers build config 00:04:54.192 net/nfb: not in enabled drivers build config 00:04:54.192 net/nfp: not in enabled drivers build config 00:04:54.192 net/ngbe: not in enabled drivers build config 00:04:54.192 net/null: not in enabled drivers build config 00:04:54.192 net/octeontx: not in enabled drivers build config 00:04:54.192 net/octeon_ep: not in enabled drivers build config 00:04:54.192 net/pcap: not in enabled drivers build config 00:04:54.192 net/pfe: not in enabled drivers build config 00:04:54.192 net/qede: not in enabled drivers build config 00:04:54.192 net/ring: not in enabled drivers build config 00:04:54.192 net/sfc: not in enabled drivers build config 00:04:54.192 net/softnic: not in enabled drivers build config 00:04:54.192 net/tap: not in enabled drivers build config 00:04:54.192 net/thunderx: not in enabled drivers build config 00:04:54.192 net/txgbe: not in enabled drivers build config 00:04:54.192 net/vdev_netvsc: not in enabled drivers build config 00:04:54.192 net/vhost: not in enabled drivers build config 00:04:54.192 net/virtio: not in enabled drivers build config 00:04:54.192 net/vmxnet3: not in enabled drivers build config 00:04:54.192 raw/*: missing internal dependency, "rawdev" 00:04:54.192 crypto/armv8: not in enabled drivers build config 00:04:54.192 crypto/bcmfs: not in enabled drivers build config 00:04:54.192 crypto/caam_jr: not in enabled drivers build config 00:04:54.192 crypto/ccp: not in enabled drivers build config 00:04:54.192 crypto/cnxk: not in enabled drivers build config 00:04:54.192 crypto/dpaa_sec: not in enabled drivers build config 00:04:54.192 crypto/dpaa2_sec: not in enabled drivers build config 00:04:54.192 crypto/ipsec_mb: not in enabled drivers build config 00:04:54.192 crypto/mlx5: not in enabled drivers build config 00:04:54.192 crypto/mvsam: not in enabled drivers build config 00:04:54.192 crypto/nitrox: not in enabled drivers build config 00:04:54.192 crypto/null: not in enabled drivers build config 00:04:54.192 crypto/octeontx: not in enabled drivers build config 00:04:54.192 crypto/openssl: not in enabled drivers build config 00:04:54.192 crypto/scheduler: not in enabled drivers build config 00:04:54.192 crypto/uadk: not in enabled drivers 
build config 00:04:54.192 crypto/virtio: not in enabled drivers build config 00:04:54.192 compress/isal: not in enabled drivers build config 00:04:54.192 compress/mlx5: not in enabled drivers build config 00:04:54.192 compress/nitrox: not in enabled drivers build config 00:04:54.192 compress/octeontx: not in enabled drivers build config 00:04:54.192 compress/zlib: not in enabled drivers build config 00:04:54.192 regex/*: missing internal dependency, "regexdev" 00:04:54.192 ml/*: missing internal dependency, "mldev" 00:04:54.192 vdpa/ifc: not in enabled drivers build config 00:04:54.192 vdpa/mlx5: not in enabled drivers build config 00:04:54.192 vdpa/nfp: not in enabled drivers build config 00:04:54.192 vdpa/sfc: not in enabled drivers build config 00:04:54.192 event/*: missing internal dependency, "eventdev" 00:04:54.192 baseband/*: missing internal dependency, "bbdev" 00:04:54.192 gpu/*: missing internal dependency, "gpudev" 00:04:54.192 00:04:54.192 00:04:54.192 Build targets in project: 85 00:04:54.192 00:04:54.192 DPDK 24.03.0 00:04:54.192 00:04:54.192 User defined options 00:04:54.192 buildtype : debug 00:04:54.192 default_library : shared 00:04:54.192 libdir : lib 00:04:54.192 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:54.192 b_sanitize : address 00:04:54.192 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:54.192 c_link_args : 00:04:54.192 cpu_instruction_set: native 00:04:54.192 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:54.192 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:54.192 enable_docs : false 00:04:54.192 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:54.192 enable_kmods : false 00:04:54.192 max_lcores : 128 00:04:54.192 tests : false 00:04:54.192 00:04:54.192 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:54.192 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:54.452 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:54.452 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:54.452 [3/268] Linking static target lib/librte_kvargs.a 00:04:54.452 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:54.452 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:54.452 [6/268] Linking static target lib/librte_log.a 00:04:54.711 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:54.711 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:54.711 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:54.971 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:54.971 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:54.971 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.971 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:54.971 [14/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:54.971 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:54.971 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:54.971 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:54.971 [18/268] Linking static target lib/librte_telemetry.a 00:04:55.238 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:55.499 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:55.499 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:55.499 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:55.499 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:55.499 [24/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.499 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:55.499 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:55.499 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:55.499 [28/268] Linking target lib/librte_log.so.24.1 00:04:55.499 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:55.758 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:55.758 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:55.758 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.016 [33/268] Linking target lib/librte_kvargs.so.24.1 00:04:56.016 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:56.016 [35/268] Linking target lib/librte_telemetry.so.24.1 00:04:56.016 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:56.016 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:56.016 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:56.016 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:56.016 [40/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:56.016 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:56.016 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:56.016 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:56.276 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:56.276 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:56.276 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:56.276 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:56.535 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:56.535 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:56.535 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:56.535 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:56.535 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:56.794 [53/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:56.794 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:56.794 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:56.794 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:56.794 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:57.053 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:57.053 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:57.053 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:57.053 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:57.053 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:57.312 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:57.312 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:57.312 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:57.312 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:57.312 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:57.571 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:57.571 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:57.571 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:57.865 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:57.865 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:57.865 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:57.865 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:57.865 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:57.865 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:57.865 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:58.125 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:58.125 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:58.125 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:58.125 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:58.125 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:58.125 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:58.384 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:58.384 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:58.384 [86/268] Linking static target lib/librte_ring.a 00:04:58.384 [87/268] Linking static target lib/librte_eal.a 00:04:58.643 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:58.643 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:58.643 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:58.643 [91/268] Linking static target lib/librte_rcu.a 00:04:58.643 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:58.643 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:58.643 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 
00:04:58.902 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.902 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:58.902 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:58.902 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:58.902 [99/268] Linking static target lib/librte_mempool.a 00:04:58.902 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:59.161 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:59.161 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:59.161 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.161 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:59.419 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:59.419 [106/268] Linking static target lib/librte_mbuf.a 00:04:59.419 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:59.419 [108/268] Linking static target lib/librte_meter.a 00:04:59.419 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:59.419 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:59.419 [111/268] Linking static target lib/librte_net.a 00:04:59.419 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:59.678 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:59.678 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:59.678 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.937 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:59.937 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.937 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:00.195 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:00.195 [120/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.195 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:00.454 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:00.454 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.454 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:00.712 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:00.713 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:00.713 [127/268] Linking static target lib/librte_pci.a 00:05:00.713 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:00.713 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:00.713 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:00.713 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:00.971 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:00.971 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:00.971 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:00.972 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:05:00.972 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:00.972 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:00.972 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:00.972 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:00.972 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:00.972 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:00.972 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:01.231 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:01.231 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:01.231 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:01.490 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:01.490 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:01.490 [148/268] Linking static target lib/librte_cmdline.a 00:05:01.490 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:01.490 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:01.490 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:01.490 [152/268] Linking static target lib/librte_timer.a 00:05:01.749 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:01.749 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:02.008 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:02.008 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:02.008 [157/268] Linking static target lib/librte_ethdev.a 00:05:02.008 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:02.008 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:02.008 [160/268] Linking static target lib/librte_compressdev.a 00:05:02.266 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:02.266 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.266 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:02.266 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:02.266 [165/268] Linking static target lib/librte_hash.a 00:05:02.266 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:02.526 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:02.526 [168/268] Linking static target lib/librte_dmadev.a 00:05:02.526 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:02.785 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:02.785 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:02.785 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:03.043 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:03.043 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.043 [175/268] Generating lib/compressdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:05:03.302 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:03.302 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:03.302 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:03.302 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:03.302 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:03.302 [181/268] Linking static target lib/librte_cryptodev.a 00:05:03.302 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.561 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:03.562 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:03.562 [185/268] Linking static target lib/librte_power.a 00:05:03.562 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.820 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:03.820 [188/268] Linking static target lib/librte_reorder.a 00:05:03.820 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:04.079 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:04.079 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:04.079 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:04.079 [193/268] Linking static target lib/librte_security.a 00:05:04.337 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.337 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:04.906 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.906 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.906 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:04.906 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:04.906 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:05.164 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:05.164 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:05.164 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:05.423 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:05.423 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:05.423 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:05.682 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:05.682 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:05.682 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:05.682 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:05.940 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.940 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:05.941 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:05.941 [214/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:05.941 [215/268] Linking static target drivers/librte_bus_vdev.a 00:05:05.941 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:05.941 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:05.941 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:05.941 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:05.941 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:05.941 [221/268] Linking static target drivers/librte_bus_pci.a 00:05:06.200 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:06.200 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:06.200 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:06.200 [225/268] Linking static target drivers/librte_mempool_ring.a 00:05:06.200 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.459 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.027 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:10.320 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:10.320 [230/268] Linking static target lib/librte_vhost.a 00:05:10.579 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.838 [232/268] Linking target lib/librte_eal.so.24.1 00:05:10.838 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:10.838 [234/268] Linking target lib/librte_dmadev.so.24.1 00:05:10.838 [235/268] Linking target lib/librte_ring.so.24.1 00:05:10.838 [236/268] Linking target lib/librte_meter.so.24.1 00:05:11.102 [237/268] Linking target lib/librte_timer.so.24.1 00:05:11.102 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:11.102 [239/268] Linking target lib/librte_pci.so.24.1 00:05:11.102 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:11.102 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:11.102 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:11.102 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:11.102 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:11.102 [245/268] Linking target lib/librte_mempool.so.24.1 00:05:11.102 [246/268] Linking target lib/librte_rcu.so.24.1 00:05:11.102 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:11.102 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.102 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:11.361 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:11.361 [251/268] Linking target lib/librte_mbuf.so.24.1 00:05:11.361 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:11.361 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:11.361 [254/268] Linking 
target lib/librte_compressdev.so.24.1 00:05:11.361 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:11.361 [256/268] Linking target lib/librte_net.so.24.1 00:05:11.361 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:05:11.619 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:11.619 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:11.619 [260/268] Linking target lib/librte_hash.so.24.1 00:05:11.619 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:11.619 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:11.619 [263/268] Linking target lib/librte_security.so.24.1 00:05:11.879 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:11.879 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:11.879 [266/268] Linking target lib/librte_power.so.24.1 00:05:12.448 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.448 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:12.448 INFO: autodetecting backend as ninja 00:05:12.448 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:30.700 CC lib/ut/ut.o 00:05:30.700 CC lib/log/log.o 00:05:30.700 CC lib/log/log_flags.o 00:05:30.700 CC lib/log/log_deprecated.o 00:05:30.700 CC lib/ut_mock/mock.o 00:05:30.700 LIB libspdk_ut_mock.a 00:05:30.700 LIB libspdk_ut.a 00:05:30.700 SO libspdk_ut_mock.so.6.0 00:05:30.700 SO libspdk_ut.so.2.0 00:05:30.700 LIB libspdk_log.a 00:05:30.700 SYMLINK libspdk_ut_mock.so 00:05:30.700 SYMLINK libspdk_ut.so 00:05:30.700 SO libspdk_log.so.7.0 00:05:30.700 SYMLINK libspdk_log.so 00:05:30.700 CC lib/util/base64.o 00:05:30.700 CC lib/util/bit_array.o 00:05:30.700 CC lib/util/cpuset.o 00:05:30.701 CC lib/util/crc32.o 00:05:30.701 CC lib/util/crc16.o 00:05:30.701 CC lib/util/crc32c.o 00:05:30.701 CC lib/ioat/ioat.o 00:05:30.701 CXX lib/trace_parser/trace.o 00:05:30.701 CC lib/dma/dma.o 00:05:30.701 CC lib/vfio_user/host/vfio_user_pci.o 00:05:30.701 CC lib/vfio_user/host/vfio_user.o 00:05:30.701 CC lib/util/crc32_ieee.o 00:05:30.701 CC lib/util/crc64.o 00:05:30.701 CC lib/util/dif.o 00:05:30.701 CC lib/util/fd.o 00:05:30.701 LIB libspdk_dma.a 00:05:30.701 CC lib/util/fd_group.o 00:05:30.701 CC lib/util/file.o 00:05:30.701 SO libspdk_dma.so.5.0 00:05:30.701 CC lib/util/hexlify.o 00:05:30.701 LIB libspdk_ioat.a 00:05:30.701 SYMLINK libspdk_dma.so 00:05:30.701 CC lib/util/iov.o 00:05:30.701 CC lib/util/math.o 00:05:30.701 SO libspdk_ioat.so.7.0 00:05:30.701 CC lib/util/net.o 00:05:30.701 LIB libspdk_vfio_user.a 00:05:30.701 SYMLINK libspdk_ioat.so 00:05:30.701 CC lib/util/pipe.o 00:05:30.701 CC lib/util/strerror_tls.o 00:05:30.701 SO libspdk_vfio_user.so.5.0 00:05:30.701 CC lib/util/string.o 00:05:30.701 CC lib/util/uuid.o 00:05:30.701 CC lib/util/xor.o 00:05:30.701 SYMLINK libspdk_vfio_user.so 00:05:30.701 CC lib/util/zipf.o 00:05:30.701 CC lib/util/md5.o 00:05:30.701 LIB libspdk_util.a 00:05:30.701 SO libspdk_util.so.10.0 00:05:30.701 LIB libspdk_trace_parser.a 00:05:30.701 SO libspdk_trace_parser.so.6.0 00:05:30.701 SYMLINK libspdk_util.so 00:05:30.701 SYMLINK libspdk_trace_parser.so 00:05:30.701 CC lib/rdma_provider/common.o 00:05:30.701 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:30.701 CC lib/idxd/idxd.o 00:05:30.701 CC lib/idxd/idxd_user.o 00:05:30.701 CC lib/json/json_parse.o 00:05:30.701 CC 
lib/idxd/idxd_kernel.o 00:05:30.701 CC lib/rdma_utils/rdma_utils.o 00:05:30.701 CC lib/vmd/vmd.o 00:05:30.701 CC lib/env_dpdk/env.o 00:05:30.701 CC lib/conf/conf.o 00:05:30.701 CC lib/json/json_util.o 00:05:30.701 CC lib/json/json_write.o 00:05:30.701 LIB libspdk_rdma_provider.a 00:05:30.701 SO libspdk_rdma_provider.so.6.0 00:05:30.701 LIB libspdk_conf.a 00:05:30.701 CC lib/vmd/led.o 00:05:30.701 CC lib/env_dpdk/memory.o 00:05:30.701 SO libspdk_conf.so.6.0 00:05:30.701 LIB libspdk_rdma_utils.a 00:05:30.701 SYMLINK libspdk_rdma_provider.so 00:05:30.701 CC lib/env_dpdk/pci.o 00:05:30.701 SO libspdk_rdma_utils.so.1.0 00:05:30.701 SYMLINK libspdk_conf.so 00:05:30.701 CC lib/env_dpdk/init.o 00:05:30.701 SYMLINK libspdk_rdma_utils.so 00:05:30.701 CC lib/env_dpdk/threads.o 00:05:30.701 CC lib/env_dpdk/pci_ioat.o 00:05:30.701 CC lib/env_dpdk/pci_virtio.o 00:05:30.701 LIB libspdk_json.a 00:05:30.701 SO libspdk_json.so.6.0 00:05:30.701 CC lib/env_dpdk/pci_vmd.o 00:05:30.701 CC lib/env_dpdk/pci_idxd.o 00:05:30.701 CC lib/env_dpdk/pci_event.o 00:05:30.960 SYMLINK libspdk_json.so 00:05:30.960 CC lib/env_dpdk/sigbus_handler.o 00:05:30.960 CC lib/env_dpdk/pci_dpdk.o 00:05:30.960 LIB libspdk_idxd.a 00:05:30.960 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:30.960 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:30.960 SO libspdk_idxd.so.12.1 00:05:30.960 LIB libspdk_vmd.a 00:05:30.960 SO libspdk_vmd.so.6.0 00:05:30.960 SYMLINK libspdk_idxd.so 00:05:31.220 SYMLINK libspdk_vmd.so 00:05:31.220 CC lib/jsonrpc/jsonrpc_server.o 00:05:31.220 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:31.220 CC lib/jsonrpc/jsonrpc_client.o 00:05:31.220 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:31.479 LIB libspdk_jsonrpc.a 00:05:31.479 SO libspdk_jsonrpc.so.6.0 00:05:31.479 SYMLINK libspdk_jsonrpc.so 00:05:31.744 LIB libspdk_env_dpdk.a 00:05:32.004 SO libspdk_env_dpdk.so.15.0 00:05:32.004 CC lib/rpc/rpc.o 00:05:32.004 SYMLINK libspdk_env_dpdk.so 00:05:32.263 LIB libspdk_rpc.a 00:05:32.263 SO libspdk_rpc.so.6.0 00:05:32.263 SYMLINK libspdk_rpc.so 00:05:32.832 CC lib/trace/trace_rpc.o 00:05:32.832 CC lib/trace/trace_flags.o 00:05:32.832 CC lib/trace/trace.o 00:05:32.832 CC lib/notify/notify.o 00:05:32.832 CC lib/notify/notify_rpc.o 00:05:32.832 CC lib/keyring/keyring.o 00:05:32.832 CC lib/keyring/keyring_rpc.o 00:05:32.832 LIB libspdk_notify.a 00:05:32.832 SO libspdk_notify.so.6.0 00:05:32.832 LIB libspdk_trace.a 00:05:32.832 LIB libspdk_keyring.a 00:05:32.832 SYMLINK libspdk_notify.so 00:05:33.091 SO libspdk_keyring.so.2.0 00:05:33.091 SO libspdk_trace.so.11.0 00:05:33.091 SYMLINK libspdk_keyring.so 00:05:33.091 SYMLINK libspdk_trace.so 00:05:33.350 CC lib/thread/thread.o 00:05:33.350 CC lib/thread/iobuf.o 00:05:33.350 CC lib/sock/sock.o 00:05:33.350 CC lib/sock/sock_rpc.o 00:05:33.919 LIB libspdk_sock.a 00:05:33.919 SO libspdk_sock.so.10.0 00:05:33.919 SYMLINK libspdk_sock.so 00:05:34.488 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:34.488 CC lib/nvme/nvme_ctrlr.o 00:05:34.488 CC lib/nvme/nvme_fabric.o 00:05:34.488 CC lib/nvme/nvme_ns_cmd.o 00:05:34.488 CC lib/nvme/nvme_ns.o 00:05:34.488 CC lib/nvme/nvme_pcie_common.o 00:05:34.488 CC lib/nvme/nvme_qpair.o 00:05:34.488 CC lib/nvme/nvme_pcie.o 00:05:34.488 CC lib/nvme/nvme.o 00:05:35.058 LIB libspdk_thread.a 00:05:35.058 CC lib/nvme/nvme_quirks.o 00:05:35.058 CC lib/nvme/nvme_transport.o 00:05:35.058 SO libspdk_thread.so.10.2 00:05:35.058 CC lib/nvme/nvme_discovery.o 00:05:35.058 SYMLINK libspdk_thread.so 00:05:35.058 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:35.316 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:35.316 CC 
lib/nvme/nvme_tcp.o 00:05:35.316 CC lib/accel/accel.o 00:05:35.316 CC lib/nvme/nvme_opal.o 00:05:35.575 CC lib/nvme/nvme_io_msg.o 00:05:35.575 CC lib/nvme/nvme_poll_group.o 00:05:35.575 CC lib/nvme/nvme_zns.o 00:05:35.575 CC lib/nvme/nvme_stubs.o 00:05:35.834 CC lib/nvme/nvme_auth.o 00:05:35.834 CC lib/nvme/nvme_cuse.o 00:05:35.834 CC lib/nvme/nvme_rdma.o 00:05:36.094 CC lib/accel/accel_rpc.o 00:05:36.094 CC lib/accel/accel_sw.o 00:05:36.094 CC lib/blob/blobstore.o 00:05:36.354 CC lib/blob/request.o 00:05:36.354 CC lib/init/json_config.o 00:05:36.354 CC lib/init/subsystem.o 00:05:36.614 LIB libspdk_accel.a 00:05:36.614 CC lib/init/subsystem_rpc.o 00:05:36.614 SO libspdk_accel.so.16.0 00:05:36.614 CC lib/init/rpc.o 00:05:36.614 SYMLINK libspdk_accel.so 00:05:36.614 CC lib/blob/zeroes.o 00:05:36.614 CC lib/blob/blob_bs_dev.o 00:05:36.873 LIB libspdk_init.a 00:05:36.873 SO libspdk_init.so.6.0 00:05:36.873 CC lib/fsdev/fsdev.o 00:05:36.873 CC lib/bdev/bdev.o 00:05:36.873 CC lib/virtio/virtio.o 00:05:36.873 CC lib/virtio/virtio_vhost_user.o 00:05:36.873 CC lib/virtio/virtio_vfio_user.o 00:05:36.873 SYMLINK libspdk_init.so 00:05:36.873 CC lib/virtio/virtio_pci.o 00:05:36.873 CC lib/bdev/bdev_rpc.o 00:05:37.132 CC lib/event/app.o 00:05:37.132 CC lib/event/reactor.o 00:05:37.132 CC lib/bdev/bdev_zone.o 00:05:37.132 CC lib/bdev/part.o 00:05:37.132 LIB libspdk_virtio.a 00:05:37.391 SO libspdk_virtio.so.7.0 00:05:37.391 CC lib/bdev/scsi_nvme.o 00:05:37.391 SYMLINK libspdk_virtio.so 00:05:37.391 CC lib/event/log_rpc.o 00:05:37.391 CC lib/fsdev/fsdev_io.o 00:05:37.391 LIB libspdk_nvme.a 00:05:37.391 CC lib/event/app_rpc.o 00:05:37.391 CC lib/event/scheduler_static.o 00:05:37.651 CC lib/fsdev/fsdev_rpc.o 00:05:37.651 SO libspdk_nvme.so.14.0 00:05:37.651 LIB libspdk_fsdev.a 00:05:37.912 SO libspdk_fsdev.so.1.0 00:05:37.912 LIB libspdk_event.a 00:05:37.912 SO libspdk_event.so.15.0 00:05:37.912 SYMLINK libspdk_fsdev.so 00:05:37.912 SYMLINK libspdk_nvme.so 00:05:37.912 SYMLINK libspdk_event.so 00:05:38.171 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:39.139 LIB libspdk_fuse_dispatcher.a 00:05:39.139 SO libspdk_fuse_dispatcher.so.1.0 00:05:39.139 SYMLINK libspdk_fuse_dispatcher.so 00:05:39.707 LIB libspdk_blob.a 00:05:39.707 SO libspdk_blob.so.11.0 00:05:39.707 LIB libspdk_bdev.a 00:05:39.967 SO libspdk_bdev.so.17.0 00:05:39.967 SYMLINK libspdk_blob.so 00:05:39.967 SYMLINK libspdk_bdev.so 00:05:40.226 CC lib/lvol/lvol.o 00:05:40.226 CC lib/blobfs/blobfs.o 00:05:40.226 CC lib/blobfs/tree.o 00:05:40.226 CC lib/ublk/ublk.o 00:05:40.226 CC lib/ublk/ublk_rpc.o 00:05:40.226 CC lib/nvmf/ctrlr.o 00:05:40.226 CC lib/nvmf/ctrlr_discovery.o 00:05:40.227 CC lib/ftl/ftl_core.o 00:05:40.227 CC lib/scsi/dev.o 00:05:40.227 CC lib/nbd/nbd.o 00:05:40.227 CC lib/scsi/lun.o 00:05:40.486 CC lib/ftl/ftl_init.o 00:05:40.486 CC lib/ftl/ftl_layout.o 00:05:40.486 CC lib/nbd/nbd_rpc.o 00:05:40.486 CC lib/scsi/port.o 00:05:40.745 CC lib/scsi/scsi.o 00:05:40.745 CC lib/scsi/scsi_bdev.o 00:05:40.745 CC lib/nvmf/ctrlr_bdev.o 00:05:40.745 LIB libspdk_nbd.a 00:05:40.745 CC lib/ftl/ftl_debug.o 00:05:40.745 CC lib/scsi/scsi_pr.o 00:05:40.745 SO libspdk_nbd.so.7.0 00:05:40.745 CC lib/scsi/scsi_rpc.o 00:05:40.745 SYMLINK libspdk_nbd.so 00:05:40.745 CC lib/ftl/ftl_io.o 00:05:41.005 LIB libspdk_ublk.a 00:05:41.005 CC lib/ftl/ftl_sb.o 00:05:41.005 SO libspdk_ublk.so.3.0 00:05:41.005 CC lib/ftl/ftl_l2p.o 00:05:41.005 SYMLINK libspdk_ublk.so 00:05:41.005 CC lib/scsi/task.o 00:05:41.005 LIB libspdk_blobfs.a 00:05:41.005 SO libspdk_blobfs.so.10.0 
00:05:41.005 CC lib/ftl/ftl_l2p_flat.o 00:05:41.005 CC lib/ftl/ftl_nv_cache.o 00:05:41.264 CC lib/ftl/ftl_band.o 00:05:41.264 CC lib/ftl/ftl_band_ops.o 00:05:41.264 SYMLINK libspdk_blobfs.so 00:05:41.264 CC lib/ftl/ftl_writer.o 00:05:41.264 LIB libspdk_lvol.a 00:05:41.264 CC lib/nvmf/subsystem.o 00:05:41.264 SO libspdk_lvol.so.10.0 00:05:41.264 LIB libspdk_scsi.a 00:05:41.264 SO libspdk_scsi.so.9.0 00:05:41.264 SYMLINK libspdk_lvol.so 00:05:41.264 CC lib/nvmf/nvmf.o 00:05:41.264 CC lib/ftl/ftl_rq.o 00:05:41.522 SYMLINK libspdk_scsi.so 00:05:41.522 CC lib/nvmf/nvmf_rpc.o 00:05:41.522 CC lib/nvmf/transport.o 00:05:41.522 CC lib/nvmf/tcp.o 00:05:41.522 CC lib/nvmf/stubs.o 00:05:41.522 CC lib/iscsi/conn.o 00:05:41.522 CC lib/vhost/vhost.o 00:05:42.091 CC lib/vhost/vhost_rpc.o 00:05:42.091 CC lib/ftl/ftl_reloc.o 00:05:42.091 CC lib/ftl/ftl_l2p_cache.o 00:05:42.091 CC lib/iscsi/init_grp.o 00:05:42.091 CC lib/nvmf/mdns_server.o 00:05:42.350 CC lib/iscsi/iscsi.o 00:05:42.350 CC lib/iscsi/param.o 00:05:42.350 CC lib/iscsi/portal_grp.o 00:05:42.350 CC lib/nvmf/rdma.o 00:05:42.609 CC lib/nvmf/auth.o 00:05:42.609 CC lib/iscsi/tgt_node.o 00:05:42.609 CC lib/vhost/vhost_scsi.o 00:05:42.609 CC lib/iscsi/iscsi_subsystem.o 00:05:42.609 CC lib/iscsi/iscsi_rpc.o 00:05:42.609 CC lib/iscsi/task.o 00:05:42.868 CC lib/ftl/ftl_p2l.o 00:05:42.868 CC lib/vhost/vhost_blk.o 00:05:43.127 CC lib/ftl/ftl_p2l_log.o 00:05:43.127 CC lib/ftl/mngt/ftl_mngt.o 00:05:43.127 CC lib/vhost/rte_vhost_user.o 00:05:43.127 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:43.386 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:43.645 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:43.645 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:43.645 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:43.645 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:43.645 CC lib/ftl/utils/ftl_conf.o 00:05:43.645 CC lib/ftl/utils/ftl_md.o 00:05:43.645 LIB libspdk_iscsi.a 00:05:43.904 CC lib/ftl/utils/ftl_mempool.o 00:05:43.904 CC lib/ftl/utils/ftl_bitmap.o 00:05:43.904 SO libspdk_iscsi.so.8.0 00:05:43.904 CC lib/ftl/utils/ftl_property.o 00:05:43.904 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:43.904 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:43.904 SYMLINK libspdk_iscsi.so 00:05:43.904 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:44.163 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:44.163 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:44.163 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:44.163 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:44.163 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:44.163 LIB libspdk_vhost.a 00:05:44.163 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:44.163 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:44.163 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:44.163 SO libspdk_vhost.so.8.0 00:05:44.163 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:44.163 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:44.423 CC lib/ftl/base/ftl_base_dev.o 00:05:44.423 CC lib/ftl/base/ftl_base_bdev.o 00:05:44.423 CC lib/ftl/ftl_trace.o 00:05:44.423 SYMLINK libspdk_vhost.so 00:05:44.682 LIB libspdk_ftl.a 00:05:44.941 SO libspdk_ftl.so.9.0 00:05:44.941 LIB libspdk_nvmf.a 00:05:45.210 SO libspdk_nvmf.so.19.0 00:05:45.210 SYMLINK libspdk_ftl.so 00:05:45.471 SYMLINK libspdk_nvmf.so 00:05:45.731 CC module/env_dpdk/env_dpdk_rpc.o 00:05:45.731 CC module/scheduler/gscheduler/gscheduler.o 
00:05:45.731 CC module/blob/bdev/blob_bdev.o 00:05:45.731 CC module/accel/error/accel_error.o 00:05:45.731 CC module/fsdev/aio/fsdev_aio.o 00:05:45.731 CC module/keyring/linux/keyring.o 00:05:45.731 CC module/sock/posix/posix.o 00:05:45.731 CC module/keyring/file/keyring.o 00:05:45.731 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:45.731 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:45.990 LIB libspdk_env_dpdk_rpc.a 00:05:45.990 SO libspdk_env_dpdk_rpc.so.6.0 00:05:45.990 CC module/keyring/linux/keyring_rpc.o 00:05:45.990 SYMLINK libspdk_env_dpdk_rpc.so 00:05:45.990 LIB libspdk_scheduler_dpdk_governor.a 00:05:45.990 LIB libspdk_scheduler_gscheduler.a 00:05:45.990 CC module/keyring/file/keyring_rpc.o 00:05:45.990 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:45.990 CC module/accel/error/accel_error_rpc.o 00:05:45.990 SO libspdk_scheduler_gscheduler.so.4.0 00:05:45.990 LIB libspdk_scheduler_dynamic.a 00:05:45.990 SO libspdk_scheduler_dynamic.so.4.0 00:05:45.990 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:45.990 SYMLINK libspdk_scheduler_gscheduler.so 00:05:45.990 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:45.990 CC module/fsdev/aio/linux_aio_mgr.o 00:05:45.990 LIB libspdk_keyring_linux.a 00:05:45.990 LIB libspdk_blob_bdev.a 00:05:45.990 SYMLINK libspdk_scheduler_dynamic.so 00:05:46.250 SO libspdk_keyring_linux.so.1.0 00:05:46.250 CC module/accel/ioat/accel_ioat.o 00:05:46.250 LIB libspdk_accel_error.a 00:05:46.250 LIB libspdk_keyring_file.a 00:05:46.250 SO libspdk_blob_bdev.so.11.0 00:05:46.250 SO libspdk_accel_error.so.2.0 00:05:46.250 SO libspdk_keyring_file.so.2.0 00:05:46.250 SYMLINK libspdk_keyring_linux.so 00:05:46.250 SYMLINK libspdk_blob_bdev.so 00:05:46.250 CC module/accel/ioat/accel_ioat_rpc.o 00:05:46.250 SYMLINK libspdk_keyring_file.so 00:05:46.250 SYMLINK libspdk_accel_error.so 00:05:46.250 CC module/accel/dsa/accel_dsa.o 00:05:46.250 CC module/accel/dsa/accel_dsa_rpc.o 00:05:46.250 LIB libspdk_accel_ioat.a 00:05:46.250 SO libspdk_accel_ioat.so.6.0 00:05:46.509 CC module/accel/iaa/accel_iaa.o 00:05:46.509 SYMLINK libspdk_accel_ioat.so 00:05:46.509 CC module/accel/iaa/accel_iaa_rpc.o 00:05:46.509 CC module/bdev/gpt/gpt.o 00:05:46.509 CC module/bdev/error/vbdev_error.o 00:05:46.509 CC module/bdev/delay/vbdev_delay.o 00:05:46.509 CC module/blobfs/bdev/blobfs_bdev.o 00:05:46.509 LIB libspdk_fsdev_aio.a 00:05:46.509 SO libspdk_fsdev_aio.so.1.0 00:05:46.509 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:46.509 LIB libspdk_accel_dsa.a 00:05:46.509 LIB libspdk_accel_iaa.a 00:05:46.509 CC module/bdev/lvol/vbdev_lvol.o 00:05:46.509 SO libspdk_accel_dsa.so.5.0 00:05:46.509 SO libspdk_accel_iaa.so.3.0 00:05:46.509 LIB libspdk_sock_posix.a 00:05:46.768 SYMLINK libspdk_fsdev_aio.so 00:05:46.768 CC module/bdev/gpt/vbdev_gpt.o 00:05:46.768 SO libspdk_sock_posix.so.6.0 00:05:46.768 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:46.768 SYMLINK libspdk_accel_iaa.so 00:05:46.768 SYMLINK libspdk_accel_dsa.so 00:05:46.768 CC module/bdev/error/vbdev_error_rpc.o 00:05:46.768 SYMLINK libspdk_sock_posix.so 00:05:46.768 CC module/bdev/malloc/bdev_malloc.o 00:05:46.768 LIB libspdk_blobfs_bdev.a 00:05:46.768 LIB libspdk_bdev_delay.a 00:05:46.768 LIB libspdk_bdev_error.a 00:05:46.768 CC module/bdev/null/bdev_null.o 00:05:46.768 SO libspdk_blobfs_bdev.so.6.0 00:05:47.029 CC module/bdev/nvme/bdev_nvme.o 00:05:47.029 SO libspdk_bdev_delay.so.6.0 00:05:47.029 SO libspdk_bdev_error.so.6.0 00:05:47.029 CC module/bdev/passthru/vbdev_passthru.o 00:05:47.029 CC module/bdev/raid/bdev_raid.o 
00:05:47.029 SYMLINK libspdk_blobfs_bdev.so 00:05:47.029 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:47.029 LIB libspdk_bdev_gpt.a 00:05:47.029 SYMLINK libspdk_bdev_error.so 00:05:47.029 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:47.029 SYMLINK libspdk_bdev_delay.so 00:05:47.029 CC module/bdev/nvme/nvme_rpc.o 00:05:47.029 SO libspdk_bdev_gpt.so.6.0 00:05:47.029 SYMLINK libspdk_bdev_gpt.so 00:05:47.029 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:47.332 CC module/bdev/null/bdev_null_rpc.o 00:05:47.332 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:47.332 LIB libspdk_bdev_passthru.a 00:05:47.332 CC module/bdev/split/vbdev_split.o 00:05:47.332 SO libspdk_bdev_passthru.so.6.0 00:05:47.332 LIB libspdk_bdev_null.a 00:05:47.332 SO libspdk_bdev_null.so.6.0 00:05:47.332 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:47.332 SYMLINK libspdk_bdev_passthru.so 00:05:47.332 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:47.332 LIB libspdk_bdev_malloc.a 00:05:47.592 SO libspdk_bdev_malloc.so.6.0 00:05:47.592 SYMLINK libspdk_bdev_null.so 00:05:47.592 CC module/bdev/xnvme/bdev_xnvme.o 00:05:47.592 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:47.592 CC module/bdev/split/vbdev_split_rpc.o 00:05:47.592 SYMLINK libspdk_bdev_malloc.so 00:05:47.592 CC module/bdev/raid/bdev_raid_rpc.o 00:05:47.592 LIB libspdk_bdev_lvol.a 00:05:47.592 CC module/bdev/raid/bdev_raid_sb.o 00:05:47.592 SO libspdk_bdev_lvol.so.6.0 00:05:47.592 CC module/bdev/nvme/bdev_mdns_client.o 00:05:47.592 SYMLINK libspdk_bdev_lvol.so 00:05:47.592 LIB libspdk_bdev_split.a 00:05:47.592 CC module/bdev/nvme/vbdev_opal.o 00:05:47.592 SO libspdk_bdev_split.so.6.0 00:05:47.592 LIB libspdk_bdev_xnvme.a 00:05:47.851 LIB libspdk_bdev_zone_block.a 00:05:47.851 SO libspdk_bdev_xnvme.so.3.0 00:05:47.851 SYMLINK libspdk_bdev_split.so 00:05:47.851 CC module/bdev/raid/raid0.o 00:05:47.851 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:47.851 SO libspdk_bdev_zone_block.so.6.0 00:05:47.851 CC module/bdev/aio/bdev_aio.o 00:05:47.851 SYMLINK libspdk_bdev_xnvme.so 00:05:47.851 CC module/bdev/aio/bdev_aio_rpc.o 00:05:47.851 SYMLINK libspdk_bdev_zone_block.so 00:05:47.851 CC module/bdev/raid/raid1.o 00:05:47.851 CC module/bdev/ftl/bdev_ftl.o 00:05:47.851 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:48.110 CC module/bdev/raid/concat.o 00:05:48.110 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:48.110 CC module/bdev/iscsi/bdev_iscsi.o 00:05:48.110 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:48.110 LIB libspdk_bdev_aio.a 00:05:48.110 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:48.110 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:48.110 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:48.110 SO libspdk_bdev_aio.so.6.0 00:05:48.110 LIB libspdk_bdev_raid.a 00:05:48.110 LIB libspdk_bdev_ftl.a 00:05:48.369 SO libspdk_bdev_ftl.so.6.0 00:05:48.369 SYMLINK libspdk_bdev_aio.so 00:05:48.369 SO libspdk_bdev_raid.so.6.0 00:05:48.369 SYMLINK libspdk_bdev_ftl.so 00:05:48.369 LIB libspdk_bdev_iscsi.a 00:05:48.369 SYMLINK libspdk_bdev_raid.so 00:05:48.369 SO libspdk_bdev_iscsi.so.6.0 00:05:48.369 SYMLINK libspdk_bdev_iscsi.so 00:05:48.937 LIB libspdk_bdev_virtio.a 00:05:48.937 SO libspdk_bdev_virtio.so.6.0 00:05:48.937 SYMLINK libspdk_bdev_virtio.so 00:05:49.506 LIB libspdk_bdev_nvme.a 00:05:49.506 SO libspdk_bdev_nvme.so.7.0 00:05:49.765 SYMLINK libspdk_bdev_nvme.so 00:05:50.333 CC module/event/subsystems/iobuf/iobuf.o 00:05:50.333 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:50.333 CC module/event/subsystems/vmd/vmd.o 00:05:50.333 CC module/event/subsystems/vmd/vmd_rpc.o 
00:05:50.333 CC module/event/subsystems/sock/sock.o 00:05:50.334 CC module/event/subsystems/keyring/keyring.o 00:05:50.334 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:50.334 CC module/event/subsystems/scheduler/scheduler.o 00:05:50.334 CC module/event/subsystems/fsdev/fsdev.o 00:05:50.334 LIB libspdk_event_keyring.a 00:05:50.334 LIB libspdk_event_vhost_blk.a 00:05:50.334 LIB libspdk_event_sock.a 00:05:50.334 LIB libspdk_event_vmd.a 00:05:50.334 LIB libspdk_event_scheduler.a 00:05:50.334 LIB libspdk_event_fsdev.a 00:05:50.334 LIB libspdk_event_iobuf.a 00:05:50.592 SO libspdk_event_keyring.so.1.0 00:05:50.592 SO libspdk_event_vhost_blk.so.3.0 00:05:50.592 SO libspdk_event_sock.so.5.0 00:05:50.592 SO libspdk_event_vmd.so.6.0 00:05:50.592 SO libspdk_event_scheduler.so.4.0 00:05:50.592 SO libspdk_event_fsdev.so.1.0 00:05:50.592 SO libspdk_event_iobuf.so.3.0 00:05:50.592 SYMLINK libspdk_event_keyring.so 00:05:50.592 SYMLINK libspdk_event_vhost_blk.so 00:05:50.592 SYMLINK libspdk_event_sock.so 00:05:50.592 SYMLINK libspdk_event_scheduler.so 00:05:50.592 SYMLINK libspdk_event_vmd.so 00:05:50.592 SYMLINK libspdk_event_fsdev.so 00:05:50.592 SYMLINK libspdk_event_iobuf.so 00:05:50.852 CC module/event/subsystems/accel/accel.o 00:05:51.110 LIB libspdk_event_accel.a 00:05:51.110 SO libspdk_event_accel.so.6.0 00:05:51.110 SYMLINK libspdk_event_accel.so 00:05:51.678 CC module/event/subsystems/bdev/bdev.o 00:05:51.937 LIB libspdk_event_bdev.a 00:05:51.937 SO libspdk_event_bdev.so.6.0 00:05:51.937 SYMLINK libspdk_event_bdev.so 00:05:52.196 CC module/event/subsystems/nbd/nbd.o 00:05:52.196 CC module/event/subsystems/scsi/scsi.o 00:05:52.196 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:52.196 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:52.454 CC module/event/subsystems/ublk/ublk.o 00:05:52.454 LIB libspdk_event_nbd.a 00:05:52.454 LIB libspdk_event_scsi.a 00:05:52.454 LIB libspdk_event_ublk.a 00:05:52.454 SO libspdk_event_nbd.so.6.0 00:05:52.454 SO libspdk_event_scsi.so.6.0 00:05:52.454 SO libspdk_event_ublk.so.3.0 00:05:52.454 SYMLINK libspdk_event_nbd.so 00:05:52.454 LIB libspdk_event_nvmf.a 00:05:52.454 SYMLINK libspdk_event_scsi.so 00:05:52.713 SO libspdk_event_nvmf.so.6.0 00:05:52.713 SYMLINK libspdk_event_ublk.so 00:05:52.713 SYMLINK libspdk_event_nvmf.so 00:05:52.972 CC module/event/subsystems/iscsi/iscsi.o 00:05:52.972 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:52.972 LIB libspdk_event_iscsi.a 00:05:52.972 LIB libspdk_event_vhost_scsi.a 00:05:53.232 SO libspdk_event_iscsi.so.6.0 00:05:53.232 SO libspdk_event_vhost_scsi.so.3.0 00:05:53.232 SYMLINK libspdk_event_iscsi.so 00:05:53.232 SYMLINK libspdk_event_vhost_scsi.so 00:05:53.491 SO libspdk.so.6.0 00:05:53.491 SYMLINK libspdk.so 00:05:53.750 CC app/trace_record/trace_record.o 00:05:53.750 CXX app/trace/trace.o 00:05:53.750 CC app/iscsi_tgt/iscsi_tgt.o 00:05:53.750 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:53.750 CC app/nvmf_tgt/nvmf_main.o 00:05:53.750 CC app/spdk_tgt/spdk_tgt.o 00:05:53.750 CC examples/ioat/perf/perf.o 00:05:53.750 CC test/thread/poller_perf/poller_perf.o 00:05:53.750 CC examples/util/zipf/zipf.o 00:05:53.750 CC test/dma/test_dma/test_dma.o 00:05:54.018 LINK nvmf_tgt 00:05:54.018 LINK interrupt_tgt 00:05:54.018 LINK poller_perf 00:05:54.018 LINK spdk_tgt 00:05:54.018 LINK spdk_trace_record 00:05:54.018 LINK iscsi_tgt 00:05:54.018 LINK zipf 00:05:54.018 LINK ioat_perf 00:05:54.018 LINK spdk_trace 00:05:54.300 CC app/spdk_lspci/spdk_lspci.o 00:05:54.300 CC examples/ioat/verify/verify.o 
00:05:54.300 TEST_HEADER include/spdk/accel.h 00:05:54.300 CC app/spdk_nvme_perf/perf.o 00:05:54.300 TEST_HEADER include/spdk/accel_module.h 00:05:54.300 TEST_HEADER include/spdk/assert.h 00:05:54.300 TEST_HEADER include/spdk/barrier.h 00:05:54.300 TEST_HEADER include/spdk/base64.h 00:05:54.300 TEST_HEADER include/spdk/bdev.h 00:05:54.300 TEST_HEADER include/spdk/bdev_module.h 00:05:54.300 TEST_HEADER include/spdk/bdev_zone.h 00:05:54.300 TEST_HEADER include/spdk/bit_array.h 00:05:54.300 TEST_HEADER include/spdk/bit_pool.h 00:05:54.300 TEST_HEADER include/spdk/blob_bdev.h 00:05:54.300 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:54.300 TEST_HEADER include/spdk/blobfs.h 00:05:54.300 TEST_HEADER include/spdk/blob.h 00:05:54.300 TEST_HEADER include/spdk/conf.h 00:05:54.300 TEST_HEADER include/spdk/config.h 00:05:54.300 TEST_HEADER include/spdk/cpuset.h 00:05:54.300 TEST_HEADER include/spdk/crc16.h 00:05:54.300 CC app/spdk_nvme_discover/discovery_aer.o 00:05:54.300 TEST_HEADER include/spdk/crc32.h 00:05:54.300 TEST_HEADER include/spdk/crc64.h 00:05:54.300 TEST_HEADER include/spdk/dif.h 00:05:54.300 TEST_HEADER include/spdk/dma.h 00:05:54.300 CC app/spdk_nvme_identify/identify.o 00:05:54.300 TEST_HEADER include/spdk/endian.h 00:05:54.300 TEST_HEADER include/spdk/env_dpdk.h 00:05:54.300 TEST_HEADER include/spdk/env.h 00:05:54.300 TEST_HEADER include/spdk/event.h 00:05:54.300 TEST_HEADER include/spdk/fd_group.h 00:05:54.300 TEST_HEADER include/spdk/fd.h 00:05:54.300 TEST_HEADER include/spdk/file.h 00:05:54.300 TEST_HEADER include/spdk/fsdev.h 00:05:54.300 TEST_HEADER include/spdk/fsdev_module.h 00:05:54.300 TEST_HEADER include/spdk/ftl.h 00:05:54.300 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:54.300 TEST_HEADER include/spdk/gpt_spec.h 00:05:54.300 TEST_HEADER include/spdk/hexlify.h 00:05:54.300 TEST_HEADER include/spdk/histogram_data.h 00:05:54.300 TEST_HEADER include/spdk/idxd.h 00:05:54.300 TEST_HEADER include/spdk/idxd_spec.h 00:05:54.300 TEST_HEADER include/spdk/init.h 00:05:54.300 TEST_HEADER include/spdk/ioat.h 00:05:54.300 TEST_HEADER include/spdk/ioat_spec.h 00:05:54.300 TEST_HEADER include/spdk/iscsi_spec.h 00:05:54.300 TEST_HEADER include/spdk/json.h 00:05:54.300 LINK test_dma 00:05:54.300 TEST_HEADER include/spdk/jsonrpc.h 00:05:54.300 TEST_HEADER include/spdk/keyring.h 00:05:54.300 TEST_HEADER include/spdk/keyring_module.h 00:05:54.300 TEST_HEADER include/spdk/likely.h 00:05:54.300 TEST_HEADER include/spdk/log.h 00:05:54.300 LINK spdk_lspci 00:05:54.300 CC test/app/bdev_svc/bdev_svc.o 00:05:54.300 TEST_HEADER include/spdk/lvol.h 00:05:54.300 TEST_HEADER include/spdk/md5.h 00:05:54.300 CC test/env/vtophys/vtophys.o 00:05:54.300 TEST_HEADER include/spdk/memory.h 00:05:54.300 TEST_HEADER include/spdk/mmio.h 00:05:54.300 TEST_HEADER include/spdk/nbd.h 00:05:54.300 TEST_HEADER include/spdk/net.h 00:05:54.300 TEST_HEADER include/spdk/notify.h 00:05:54.300 TEST_HEADER include/spdk/nvme.h 00:05:54.300 TEST_HEADER include/spdk/nvme_intel.h 00:05:54.300 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:54.300 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:54.300 TEST_HEADER include/spdk/nvme_spec.h 00:05:54.300 TEST_HEADER include/spdk/nvme_zns.h 00:05:54.300 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:54.300 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:54.300 TEST_HEADER include/spdk/nvmf.h 00:05:54.300 TEST_HEADER include/spdk/nvmf_spec.h 00:05:54.300 TEST_HEADER include/spdk/nvmf_transport.h 00:05:54.300 TEST_HEADER include/spdk/opal.h 00:05:54.300 TEST_HEADER include/spdk/opal_spec.h 
00:05:54.300 TEST_HEADER include/spdk/pci_ids.h 00:05:54.560 TEST_HEADER include/spdk/pipe.h 00:05:54.560 TEST_HEADER include/spdk/queue.h 00:05:54.560 TEST_HEADER include/spdk/reduce.h 00:05:54.560 TEST_HEADER include/spdk/rpc.h 00:05:54.560 TEST_HEADER include/spdk/scheduler.h 00:05:54.560 TEST_HEADER include/spdk/scsi.h 00:05:54.560 TEST_HEADER include/spdk/scsi_spec.h 00:05:54.560 TEST_HEADER include/spdk/sock.h 00:05:54.560 TEST_HEADER include/spdk/stdinc.h 00:05:54.560 TEST_HEADER include/spdk/string.h 00:05:54.560 TEST_HEADER include/spdk/thread.h 00:05:54.560 TEST_HEADER include/spdk/trace.h 00:05:54.560 TEST_HEADER include/spdk/trace_parser.h 00:05:54.560 CC test/env/mem_callbacks/mem_callbacks.o 00:05:54.560 TEST_HEADER include/spdk/tree.h 00:05:54.560 TEST_HEADER include/spdk/ublk.h 00:05:54.560 TEST_HEADER include/spdk/util.h 00:05:54.560 TEST_HEADER include/spdk/uuid.h 00:05:54.560 TEST_HEADER include/spdk/version.h 00:05:54.560 LINK verify 00:05:54.560 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:54.560 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:54.560 TEST_HEADER include/spdk/vhost.h 00:05:54.560 TEST_HEADER include/spdk/vmd.h 00:05:54.560 TEST_HEADER include/spdk/xor.h 00:05:54.560 TEST_HEADER include/spdk/zipf.h 00:05:54.560 CXX test/cpp_headers/accel.o 00:05:54.560 LINK spdk_nvme_discover 00:05:54.560 LINK bdev_svc 00:05:54.560 LINK vtophys 00:05:54.560 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:54.820 CC test/env/memory/memory_ut.o 00:05:54.820 CXX test/cpp_headers/accel_module.o 00:05:54.820 CXX test/cpp_headers/assert.o 00:05:54.820 LINK env_dpdk_post_init 00:05:54.820 CC test/env/pci/pci_ut.o 00:05:54.820 CC examples/thread/thread/thread_ex.o 00:05:54.820 CXX test/cpp_headers/barrier.o 00:05:54.820 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:55.079 CC test/app/histogram_perf/histogram_perf.o 00:05:55.079 CXX test/cpp_headers/base64.o 00:05:55.080 LINK mem_callbacks 00:05:55.080 LINK histogram_perf 00:05:55.080 LINK thread 00:05:55.080 CC test/app/jsoncat/jsoncat.o 00:05:55.080 CXX test/cpp_headers/bdev.o 00:05:55.080 LINK spdk_nvme_perf 00:05:55.338 CC test/app/stub/stub.o 00:05:55.338 LINK pci_ut 00:05:55.338 LINK jsoncat 00:05:55.338 LINK spdk_nvme_identify 00:05:55.338 CC app/spdk_top/spdk_top.o 00:05:55.338 CXX test/cpp_headers/bdev_module.o 00:05:55.338 LINK nvme_fuzz 00:05:55.338 LINK stub 00:05:55.338 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:55.597 CC examples/sock/hello_world/hello_sock.o 00:05:55.597 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:55.597 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:55.597 CXX test/cpp_headers/bdev_zone.o 00:05:55.597 CC app/vhost/vhost.o 00:05:55.597 CC examples/vmd/lsvmd/lsvmd.o 00:05:55.855 CC examples/vmd/led/led.o 00:05:55.855 LINK hello_sock 00:05:55.855 CXX test/cpp_headers/bit_array.o 00:05:55.855 LINK vhost 00:05:55.855 CC test/event/event_perf/event_perf.o 00:05:55.855 LINK memory_ut 00:05:55.855 LINK lsvmd 00:05:55.855 LINK led 00:05:55.855 CXX test/cpp_headers/bit_pool.o 00:05:55.855 LINK event_perf 00:05:56.114 LINK vhost_fuzz 00:05:56.114 CXX test/cpp_headers/blob_bdev.o 00:05:56.114 CC test/nvme/aer/aer.o 00:05:56.114 CC test/rpc_client/rpc_client_test.o 00:05:56.114 CXX test/cpp_headers/blobfs_bdev.o 00:05:56.114 CC test/event/reactor/reactor.o 00:05:56.114 CC test/event/reactor_perf/reactor_perf.o 00:05:56.373 CC examples/idxd/perf/perf.o 00:05:56.373 LINK rpc_client_test 00:05:56.373 CC test/accel/dif/dif.o 00:05:56.373 LINK spdk_top 00:05:56.374 LINK reactor 00:05:56.374 CXX 
test/cpp_headers/blobfs.o 00:05:56.374 LINK reactor_perf 00:05:56.374 CC test/nvme/reset/reset.o 00:05:56.374 LINK aer 00:05:56.374 CXX test/cpp_headers/blob.o 00:05:56.374 CXX test/cpp_headers/conf.o 00:05:56.632 CXX test/cpp_headers/config.o 00:05:56.632 CC test/event/app_repeat/app_repeat.o 00:05:56.632 CC app/spdk_dd/spdk_dd.o 00:05:56.632 CXX test/cpp_headers/cpuset.o 00:05:56.632 LINK idxd_perf 00:05:56.632 LINK reset 00:05:56.632 CC app/fio/nvme/fio_plugin.o 00:05:56.890 LINK app_repeat 00:05:56.890 CC test/event/scheduler/scheduler.o 00:05:56.890 CXX test/cpp_headers/crc16.o 00:05:56.890 CXX test/cpp_headers/crc32.o 00:05:56.890 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:56.890 CC test/nvme/sgl/sgl.o 00:05:56.890 CXX test/cpp_headers/crc64.o 00:05:56.890 LINK spdk_dd 00:05:57.148 LINK dif 00:05:57.148 LINK scheduler 00:05:57.148 CXX test/cpp_headers/dif.o 00:05:57.148 CC examples/accel/perf/accel_perf.o 00:05:57.148 LINK hello_fsdev 00:05:57.148 LINK sgl 00:05:57.148 CC examples/blob/hello_world/hello_blob.o 00:05:57.148 CXX test/cpp_headers/dma.o 00:05:57.407 CC test/nvme/e2edp/nvme_dp.o 00:05:57.407 LINK spdk_nvme 00:05:57.407 CC test/nvme/overhead/overhead.o 00:05:57.407 CC test/nvme/err_injection/err_injection.o 00:05:57.407 LINK iscsi_fuzz 00:05:57.407 CXX test/cpp_headers/endian.o 00:05:57.407 LINK hello_blob 00:05:57.407 CC test/nvme/startup/startup.o 00:05:57.407 CC app/fio/bdev/fio_plugin.o 00:05:57.407 CC test/nvme/reserve/reserve.o 00:05:57.407 CXX test/cpp_headers/env_dpdk.o 00:05:57.407 LINK err_injection 00:05:57.666 CXX test/cpp_headers/env.o 00:05:57.666 LINK nvme_dp 00:05:57.666 LINK overhead 00:05:57.666 LINK startup 00:05:57.666 LINK accel_perf 00:05:57.666 CXX test/cpp_headers/event.o 00:05:57.666 LINK reserve 00:05:57.666 CC examples/blob/cli/blobcli.o 00:05:57.924 CC test/nvme/simple_copy/simple_copy.o 00:05:57.924 CXX test/cpp_headers/fd_group.o 00:05:57.924 CC examples/nvme/hello_world/hello_world.o 00:05:57.924 CXX test/cpp_headers/fd.o 00:05:57.924 CC examples/nvme/reconnect/reconnect.o 00:05:57.924 CC test/blobfs/mkfs/mkfs.o 00:05:57.924 LINK spdk_bdev 00:05:57.924 CXX test/cpp_headers/file.o 00:05:58.182 LINK simple_copy 00:05:58.182 CC test/bdev/bdevio/bdevio.o 00:05:58.182 CC test/lvol/esnap/esnap.o 00:05:58.182 LINK hello_world 00:05:58.182 LINK mkfs 00:05:58.182 CXX test/cpp_headers/fsdev.o 00:05:58.182 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:58.182 CC examples/bdev/hello_world/hello_bdev.o 00:05:58.183 CXX test/cpp_headers/fsdev_module.o 00:05:58.183 LINK reconnect 00:05:58.183 LINK blobcli 00:05:58.441 CC test/nvme/connect_stress/connect_stress.o 00:05:58.441 CXX test/cpp_headers/ftl.o 00:05:58.441 CC test/nvme/boot_partition/boot_partition.o 00:05:58.441 LINK bdevio 00:05:58.441 CXX test/cpp_headers/fuse_dispatcher.o 00:05:58.441 LINK hello_bdev 00:05:58.441 LINK connect_stress 00:05:58.441 CC examples/nvme/arbitration/arbitration.o 00:05:58.699 CC examples/bdev/bdevperf/bdevperf.o 00:05:58.699 CC examples/nvme/hotplug/hotplug.o 00:05:58.699 LINK boot_partition 00:05:58.699 CXX test/cpp_headers/gpt_spec.o 00:05:58.699 CXX test/cpp_headers/hexlify.o 00:05:58.699 CC test/nvme/compliance/nvme_compliance.o 00:05:58.699 LINK nvme_manage 00:05:58.699 CC test/nvme/fused_ordering/fused_ordering.o 00:05:58.699 CXX test/cpp_headers/histogram_data.o 00:05:58.958 LINK arbitration 00:05:58.958 LINK hotplug 00:05:58.958 CC examples/nvme/abort/abort.o 00:05:58.958 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:58.958 CXX test/cpp_headers/idxd.o 
00:05:58.958 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:58.958 LINK fused_ordering 00:05:59.216 LINK nvme_compliance 00:05:59.216 CC test/nvme/fdp/fdp.o 00:05:59.216 LINK cmb_copy 00:05:59.216 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:59.216 CXX test/cpp_headers/idxd_spec.o 00:05:59.216 LINK doorbell_aers 00:05:59.216 CC test/nvme/cuse/cuse.o 00:05:59.216 CXX test/cpp_headers/init.o 00:05:59.216 CXX test/cpp_headers/ioat.o 00:05:59.216 CXX test/cpp_headers/ioat_spec.o 00:05:59.216 LINK pmr_persistence 00:05:59.216 LINK abort 00:05:59.474 CXX test/cpp_headers/iscsi_spec.o 00:05:59.474 CXX test/cpp_headers/json.o 00:05:59.475 CXX test/cpp_headers/jsonrpc.o 00:05:59.475 CXX test/cpp_headers/keyring.o 00:05:59.475 LINK bdevperf 00:05:59.475 CXX test/cpp_headers/keyring_module.o 00:05:59.475 LINK fdp 00:05:59.475 CXX test/cpp_headers/likely.o 00:05:59.475 CXX test/cpp_headers/log.o 00:05:59.475 CXX test/cpp_headers/lvol.o 00:05:59.475 CXX test/cpp_headers/md5.o 00:05:59.475 CXX test/cpp_headers/memory.o 00:05:59.733 CXX test/cpp_headers/mmio.o 00:05:59.733 CXX test/cpp_headers/nbd.o 00:05:59.733 CXX test/cpp_headers/net.o 00:05:59.733 CXX test/cpp_headers/notify.o 00:05:59.734 CXX test/cpp_headers/nvme.o 00:05:59.734 CXX test/cpp_headers/nvme_intel.o 00:05:59.734 CXX test/cpp_headers/nvme_ocssd.o 00:05:59.734 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:59.734 CXX test/cpp_headers/nvme_spec.o 00:05:59.734 CXX test/cpp_headers/nvme_zns.o 00:05:59.734 CXX test/cpp_headers/nvmf_cmd.o 00:05:59.734 CC examples/nvmf/nvmf/nvmf.o 00:05:59.992 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:59.992 CXX test/cpp_headers/nvmf.o 00:05:59.992 CXX test/cpp_headers/nvmf_spec.o 00:05:59.992 CXX test/cpp_headers/nvmf_transport.o 00:05:59.992 CXX test/cpp_headers/opal.o 00:05:59.992 CXX test/cpp_headers/opal_spec.o 00:05:59.992 CXX test/cpp_headers/pci_ids.o 00:05:59.992 CXX test/cpp_headers/pipe.o 00:05:59.992 CXX test/cpp_headers/queue.o 00:05:59.992 CXX test/cpp_headers/reduce.o 00:05:59.992 CXX test/cpp_headers/rpc.o 00:05:59.992 CXX test/cpp_headers/scheduler.o 00:05:59.992 CXX test/cpp_headers/scsi.o 00:06:00.252 CXX test/cpp_headers/scsi_spec.o 00:06:00.252 LINK nvmf 00:06:00.252 CXX test/cpp_headers/sock.o 00:06:00.252 CXX test/cpp_headers/stdinc.o 00:06:00.252 CXX test/cpp_headers/string.o 00:06:00.252 CXX test/cpp_headers/thread.o 00:06:00.252 CXX test/cpp_headers/trace.o 00:06:00.252 CXX test/cpp_headers/trace_parser.o 00:06:00.252 CXX test/cpp_headers/tree.o 00:06:00.252 CXX test/cpp_headers/ublk.o 00:06:00.252 CXX test/cpp_headers/util.o 00:06:00.252 CXX test/cpp_headers/uuid.o 00:06:00.252 CXX test/cpp_headers/version.o 00:06:00.510 CXX test/cpp_headers/vfio_user_pci.o 00:06:00.510 CXX test/cpp_headers/vfio_user_spec.o 00:06:00.510 CXX test/cpp_headers/vhost.o 00:06:00.510 CXX test/cpp_headers/vmd.o 00:06:00.510 CXX test/cpp_headers/xor.o 00:06:00.510 CXX test/cpp_headers/zipf.o 00:06:00.510 LINK cuse 00:06:03.830 LINK esnap 00:06:04.440 00:06:04.440 real 1m22.305s 00:06:04.440 user 7m3.670s 00:06:04.440 sys 1m54.159s 00:06:04.440 ************************************ 00:06:04.440 END TEST make 00:06:04.440 ************************************ 00:06:04.440 12:16:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:04.440 12:16:27 make -- common/autotest_common.sh@10 -- $ set +x 00:06:04.440 12:16:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:04.440 12:16:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:04.440 12:16:27 -- pm/common@40 -- $ local 
monitor pid pids signal=TERM 00:06:04.440 12:16:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.440 12:16:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:04.440 12:16:27 -- pm/common@44 -- $ pid=5272 00:06:04.440 12:16:27 -- pm/common@50 -- $ kill -TERM 5272 00:06:04.440 12:16:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.440 12:16:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:04.440 12:16:27 -- pm/common@44 -- $ pid=5274 00:06:04.440 12:16:27 -- pm/common@50 -- $ kill -TERM 5274 00:06:04.440 12:16:27 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.440 12:16:27 -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.440 12:16:27 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.700 12:16:27 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.700 12:16:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.700 12:16:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.700 12:16:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.700 12:16:27 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.700 12:16:27 -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.700 12:16:27 -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.700 12:16:27 -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.700 12:16:27 -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.700 12:16:27 -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.700 12:16:27 -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.700 12:16:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.700 12:16:27 -- scripts/common.sh@344 -- # case "$op" in 00:06:04.700 12:16:27 -- scripts/common.sh@345 -- # : 1 00:06:04.700 12:16:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.700 12:16:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.700 12:16:27 -- scripts/common.sh@365 -- # decimal 1 00:06:04.700 12:16:27 -- scripts/common.sh@353 -- # local d=1 00:06:04.700 12:16:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.700 12:16:27 -- scripts/common.sh@355 -- # echo 1 00:06:04.700 12:16:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.700 12:16:27 -- scripts/common.sh@366 -- # decimal 2 00:06:04.700 12:16:27 -- scripts/common.sh@353 -- # local d=2 00:06:04.700 12:16:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.700 12:16:27 -- scripts/common.sh@355 -- # echo 2 00:06:04.700 12:16:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.700 12:16:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.700 12:16:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.700 12:16:27 -- scripts/common.sh@368 -- # return 0 00:06:04.700 12:16:27 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.700 12:16:27 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.700 --rc genhtml_branch_coverage=1 00:06:04.700 --rc genhtml_function_coverage=1 00:06:04.700 --rc genhtml_legend=1 00:06:04.700 --rc geninfo_all_blocks=1 00:06:04.700 --rc geninfo_unexecuted_blocks=1 00:06:04.700 00:06:04.700 ' 00:06:04.700 12:16:27 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.700 --rc genhtml_branch_coverage=1 00:06:04.700 --rc genhtml_function_coverage=1 00:06:04.700 --rc genhtml_legend=1 00:06:04.700 --rc geninfo_all_blocks=1 00:06:04.700 --rc geninfo_unexecuted_blocks=1 00:06:04.700 00:06:04.700 ' 00:06:04.700 12:16:27 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.700 --rc genhtml_branch_coverage=1 00:06:04.700 --rc genhtml_function_coverage=1 00:06:04.700 --rc genhtml_legend=1 00:06:04.700 --rc geninfo_all_blocks=1 00:06:04.700 --rc geninfo_unexecuted_blocks=1 00:06:04.700 00:06:04.700 ' 00:06:04.700 12:16:27 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.700 --rc genhtml_branch_coverage=1 00:06:04.700 --rc genhtml_function_coverage=1 00:06:04.700 --rc genhtml_legend=1 00:06:04.700 --rc geninfo_all_blocks=1 00:06:04.700 --rc geninfo_unexecuted_blocks=1 00:06:04.700 00:06:04.700 ' 00:06:04.700 12:16:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:04.700 12:16:27 -- nvmf/common.sh@7 -- # uname -s 00:06:04.700 12:16:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.700 12:16:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.700 12:16:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.700 12:16:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.700 12:16:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.700 12:16:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.700 12:16:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.700 12:16:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.700 12:16:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.700 12:16:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.700 12:16:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4aa3afb5-543d-48d0-900a-0624ed4cc47b 00:06:04.700 
12:16:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=4aa3afb5-543d-48d0-900a-0624ed4cc47b 00:06:04.700 12:16:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.700 12:16:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.700 12:16:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.700 12:16:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.700 12:16:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.700 12:16:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.700 12:16:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.700 12:16:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.700 12:16:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.700 12:16:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.700 12:16:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.700 12:16:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.700 12:16:27 -- paths/export.sh@5 -- # export PATH 00:06:04.700 12:16:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.700 12:16:27 -- nvmf/common.sh@51 -- # : 0 00:06:04.700 12:16:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.700 12:16:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.700 12:16:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.700 12:16:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.700 12:16:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.700 12:16:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.700 12:16:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.700 12:16:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.700 12:16:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.700 12:16:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:04.700 12:16:27 -- spdk/autotest.sh@32 -- # uname -s 00:06:04.700 12:16:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:04.700 12:16:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:04.700 12:16:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:04.700 12:16:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:04.700 12:16:27 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:04.701 12:16:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:04.701 12:16:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:04.701 12:16:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:04.701 12:16:27 -- spdk/autotest.sh@48 -- # udevadm_pid=55100 00:06:04.701 12:16:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:04.701 12:16:27 -- pm/common@17 -- # local monitor 00:06:04.701 12:16:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.701 12:16:27 -- pm/common@21 -- # date +%s 00:06:04.701 12:16:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:04.701 12:16:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.701 12:16:27 -- pm/common@25 -- # sleep 1 00:06:04.701 12:16:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728303387 00:06:04.701 12:16:27 -- pm/common@21 -- # date +%s 00:06:04.701 12:16:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728303387 00:06:04.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728303387_collect-cpu-load.pm.log 00:06:04.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728303387_collect-vmstat.pm.log 00:06:06.077 12:16:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:06.077 12:16:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:06.077 12:16:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.077 12:16:28 -- common/autotest_common.sh@10 -- # set +x 00:06:06.077 12:16:28 -- spdk/autotest.sh@59 -- # create_test_list 00:06:06.077 12:16:28 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:06.077 12:16:28 -- common/autotest_common.sh@10 -- # set +x 00:06:06.077 12:16:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:06.077 12:16:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:06.077 12:16:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:06.077 12:16:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:06.077 12:16:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:06.077 12:16:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:06.077 12:16:29 -- common/autotest_common.sh@1455 -- # uname 00:06:06.077 12:16:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:06.077 12:16:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:06.077 12:16:29 -- common/autotest_common.sh@1475 -- # uname 00:06:06.077 12:16:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:06.077 12:16:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:06.077 12:16:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:06.077 lcov: LCOV version 1.15 00:06:06.077 12:16:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:20.993 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:20.993 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:35.888 12:16:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:35.888 12:16:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.888 12:16:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.888 12:16:58 -- spdk/autotest.sh@78 -- # rm -f 00:06:35.888 12:16:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:36.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:37.390 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:37.390 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:37.390 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:37.390 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:37.390 12:17:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:37.390 12:17:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:37.390 12:17:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:37.390 12:17:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:37.390 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.390 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:37.390 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:37.390 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:37.390 12:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.390 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.390 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:37.390 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:37.390 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:37.390 12:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.390 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.390 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:37.390 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:37.390 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:37.390 12:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.390 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.390 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:37.391 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:37.391 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:37.391 12:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.391 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.391 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:37.391 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:37.391 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:37.391 12:17:00 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.391 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.391 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:37.391 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:37.391 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:37.391 12:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.391 12:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:37.391 12:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:37.391 12:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:37.391 12:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:37.391 12:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:37.391 12:17:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:37.391 12:17:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:37.391 12:17:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:37.391 12:17:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:37.391 12:17:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:37.391 12:17:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:37.391 No valid GPT data, bailing 00:06:37.391 12:17:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:37.391 12:17:00 -- scripts/common.sh@394 -- # pt= 00:06:37.391 12:17:00 -- scripts/common.sh@395 -- # return 1 00:06:37.391 12:17:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:37.391 1+0 records in 00:06:37.391 1+0 records out 00:06:37.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161475 s, 64.9 MB/s 00:06:37.391 12:17:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:37.391 12:17:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:37.391 12:17:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:37.391 12:17:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:37.391 12:17:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:37.391 No valid GPT data, bailing 00:06:37.391 12:17:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:37.391 12:17:00 -- scripts/common.sh@394 -- # pt= 00:06:37.391 12:17:00 -- scripts/common.sh@395 -- # return 1 00:06:37.391 12:17:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:37.391 1+0 records in 00:06:37.391 1+0 records out 00:06:37.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059667 s, 176 MB/s 00:06:37.391 12:17:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:37.391 12:17:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:37.391 12:17:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:37.391 12:17:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:37.391 12:17:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:37.649 No valid GPT data, bailing 00:06:37.649 12:17:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:37.649 12:17:00 -- scripts/common.sh@394 -- # pt= 00:06:37.649 12:17:00 -- scripts/common.sh@395 -- # return 1 00:06:37.649 12:17:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:37.649 1+0 
records in 00:06:37.649 1+0 records out 00:06:37.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632594 s, 166 MB/s 00:06:37.649 12:17:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:37.649 12:17:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:37.649 12:17:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:37.649 12:17:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:37.649 12:17:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:37.649 No valid GPT data, bailing 00:06:37.649 12:17:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:37.649 12:17:00 -- scripts/common.sh@394 -- # pt= 00:06:37.649 12:17:00 -- scripts/common.sh@395 -- # return 1 00:06:37.649 12:17:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:37.649 1+0 records in 00:06:37.649 1+0 records out 00:06:37.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618956 s, 169 MB/s 00:06:37.649 12:17:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:37.649 12:17:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:37.649 12:17:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:37.649 12:17:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:37.649 12:17:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:37.649 No valid GPT data, bailing 00:06:37.649 12:17:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:37.649 12:17:00 -- scripts/common.sh@394 -- # pt= 00:06:37.649 12:17:00 -- scripts/common.sh@395 -- # return 1 00:06:37.649 12:17:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:37.649 1+0 records in 00:06:37.649 1+0 records out 00:06:37.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456416 s, 230 MB/s 00:06:37.649 12:17:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:37.649 12:17:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:37.649 12:17:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:37.649 12:17:00 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:37.649 12:17:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:37.907 No valid GPT data, bailing 00:06:37.907 12:17:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:37.907 12:17:00 -- scripts/common.sh@394 -- # pt= 00:06:37.907 12:17:00 -- scripts/common.sh@395 -- # return 1 00:06:37.907 12:17:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:37.907 1+0 records in 00:06:37.907 1+0 records out 00:06:37.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00544129 s, 193 MB/s 00:06:37.907 12:17:00 -- spdk/autotest.sh@105 -- # sync 00:06:37.907 12:17:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:37.907 12:17:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:37.907 12:17:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:41.188 12:17:03 -- spdk/autotest.sh@111 -- # uname -s 00:06:41.188 12:17:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:41.188 12:17:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:41.188 12:17:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:41.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.011 
Hugepages 00:06:42.011 node hugesize free / total 00:06:42.011 node0 1048576kB 0 / 0 00:06:42.011 node0 2048kB 0 / 0 00:06:42.011 00:06:42.011 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:42.270 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:42.270 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:42.534 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:42.534 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:42.792 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:42.792 12:17:05 -- spdk/autotest.sh@117 -- # uname -s 00:06:42.792 12:17:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:42.792 12:17:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:42.792 12:17:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.961 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.221 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.221 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.221 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.221 12:17:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:45.598 12:17:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:45.598 12:17:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:45.598 12:17:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:45.598 12:17:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:45.598 12:17:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:45.598 12:17:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:45.598 12:17:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:45.598 12:17:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:45.598 12:17:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:45.599 12:17:08 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:45.599 12:17:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:45.599 12:17:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:45.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.115 Waiting for block devices as requested 00:06:46.374 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.374 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.632 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.632 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:51.905 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:51.905 12:17:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:51.905 12:17:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:51.905 12:17:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:51.905 12:17:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:51.905 12:17:14 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:51.905 12:17:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:51.905 12:17:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:51.905 12:17:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:51.905 12:17:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:51.905 12:17:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:51.905 12:17:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:51.905 12:17:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:51.905 12:17:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:51.905 12:17:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:51.905 12:17:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:51.905 12:17:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:51.905 12:17:14 -- common/autotest_common.sh@1541 -- # continue 00:06:51.905 12:17:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:51.905 12:17:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:51.905 12:17:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:51.905 12:17:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:51.905 12:17:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:51.905 12:17:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:51.905 12:17:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:51.905 12:17:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:51.905 12:17:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1541 -- # continue 00:06:51.905 12:17:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:51.905 12:17:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:51.905 12:17:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:06:51.905 12:17:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:51.905 12:17:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:51.905 12:17:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:51.905 12:17:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1541 -- # continue 00:06:51.905 12:17:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:51.905 12:17:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:51.905 12:17:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:06:51.905 12:17:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:51.905 12:17:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:51.905 12:17:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:51.905 12:17:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:51.905 12:17:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:06:51.905 12:17:15 -- common/autotest_common.sh@1541 -- # continue 00:06:51.905 12:17:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:51.905 12:17:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.905 12:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 12:17:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:52.165 12:17:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.165 12:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:52.165 12:17:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.670 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.670 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.670 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.670 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.670 12:17:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:53.670 12:17:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.670 12:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:53.670 12:17:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:53.670 12:17:16 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:53.670 12:17:16 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:53.670 12:17:16 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:53.670 12:17:16 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:53.670 12:17:16 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:53.670 12:17:16 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:53.670 12:17:16 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:53.670 12:17:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:53.670 12:17:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:53.670 12:17:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:53.670 12:17:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:53.670 12:17:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:53.930 12:17:17 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:53.930 12:17:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:53.930 12:17:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:53.930 12:17:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.930 12:17:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:53.930 12:17:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.930 12:17:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:53.930 12:17:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:06:53.930 12:17:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:53.930 12:17:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:53.930 12:17:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:53.930 12:17:17 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:53.930 12:17:17 -- common/autotest_common.sh@1570 -- # return 0 00:06:53.930 12:17:17 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:53.930 12:17:17 -- common/autotest_common.sh@1578 -- # return 0 00:06:53.930 12:17:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:53.930 12:17:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:53.930 12:17:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:53.930 12:17:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:53.930 12:17:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:53.930 12:17:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.930 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:53.930 12:17:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:53.930 12:17:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:53.930 12:17:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.930 12:17:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.930 12:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:53.930 ************************************ 00:06:53.930 START TEST env 00:06:53.930 ************************************ 00:06:53.930 12:17:17 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:53.930 * Looking for test storage... 00:06:53.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:53.930 12:17:17 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.930 12:17:17 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.930 12:17:17 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.189 12:17:17 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.190 12:17:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.190 12:17:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.190 12:17:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.190 12:17:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.190 12:17:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.190 12:17:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.190 12:17:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.190 12:17:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.190 12:17:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.190 12:17:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.190 12:17:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.190 12:17:17 env -- scripts/common.sh@344 -- # case "$op" in 00:06:54.190 12:17:17 env -- scripts/common.sh@345 -- # : 1 00:06:54.190 12:17:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.190 12:17:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.190 12:17:17 env -- scripts/common.sh@365 -- # decimal 1 00:06:54.190 12:17:17 env -- scripts/common.sh@353 -- # local d=1 00:06:54.190 12:17:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.190 12:17:17 env -- scripts/common.sh@355 -- # echo 1 00:06:54.190 12:17:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.190 12:17:17 env -- scripts/common.sh@366 -- # decimal 2 00:06:54.190 12:17:17 env -- scripts/common.sh@353 -- # local d=2 00:06:54.190 12:17:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.190 12:17:17 env -- scripts/common.sh@355 -- # echo 2 00:06:54.190 12:17:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.190 12:17:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.190 12:17:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.190 12:17:17 env -- scripts/common.sh@368 -- # return 0 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.190 --rc genhtml_branch_coverage=1 00:06:54.190 --rc genhtml_function_coverage=1 00:06:54.190 --rc genhtml_legend=1 00:06:54.190 --rc geninfo_all_blocks=1 00:06:54.190 --rc geninfo_unexecuted_blocks=1 00:06:54.190 00:06:54.190 ' 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.190 --rc genhtml_branch_coverage=1 00:06:54.190 --rc genhtml_function_coverage=1 00:06:54.190 --rc genhtml_legend=1 00:06:54.190 --rc geninfo_all_blocks=1 00:06:54.190 --rc geninfo_unexecuted_blocks=1 00:06:54.190 00:06:54.190 ' 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.190 --rc genhtml_branch_coverage=1 00:06:54.190 --rc genhtml_function_coverage=1 00:06:54.190 --rc genhtml_legend=1 00:06:54.190 --rc geninfo_all_blocks=1 00:06:54.190 --rc geninfo_unexecuted_blocks=1 00:06:54.190 00:06:54.190 ' 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.190 --rc genhtml_branch_coverage=1 00:06:54.190 --rc genhtml_function_coverage=1 00:06:54.190 --rc genhtml_legend=1 00:06:54.190 --rc geninfo_all_blocks=1 00:06:54.190 --rc geninfo_unexecuted_blocks=1 00:06:54.190 00:06:54.190 ' 00:06:54.190 12:17:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.190 12:17:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.190 12:17:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:54.190 ************************************ 00:06:54.190 START TEST env_memory 00:06:54.190 ************************************ 00:06:54.190 12:17:17 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:54.190 00:06:54.190 00:06:54.190 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.190 http://cunit.sourceforge.net/ 00:06:54.190 00:06:54.190 00:06:54.190 Suite: memory 00:06:54.190 Test: alloc and free memory map ...[2024-10-07 12:17:17.382233] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:54.190 passed 00:06:54.190 Test: mem map translation ...[2024-10-07 12:17:17.426564] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:54.190 [2024-10-07 12:17:17.426610] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:54.190 [2024-10-07 12:17:17.426677] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:54.190 [2024-10-07 12:17:17.426714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:54.449 passed 00:06:54.449 Test: mem map registration ...[2024-10-07 12:17:17.494828] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:54.449 [2024-10-07 12:17:17.494873] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:54.449 passed 00:06:54.449 Test: mem map adjacent registrations ...passed 00:06:54.449 00:06:54.449 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.449 suites 1 1 n/a 0 0 00:06:54.449 tests 4 4 4 0 0 00:06:54.449 asserts 152 152 152 0 n/a 00:06:54.449 00:06:54.449 Elapsed time = 0.241 seconds 00:06:54.449 00:06:54.449 real 0m0.296s 00:06:54.449 user 0m0.252s 00:06:54.449 sys 0m0.035s 00:06:54.449 12:17:17 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.449 12:17:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:54.449 ************************************ 00:06:54.449 END TEST env_memory 00:06:54.449 ************************************ 00:06:54.449 12:17:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:54.449 12:17:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.449 12:17:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.449 12:17:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:54.449 ************************************ 00:06:54.449 START TEST env_vtophys 00:06:54.449 ************************************ 00:06:54.449 12:17:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:54.449 EAL: lib.eal log level changed from notice to debug 00:06:54.449 EAL: Detected lcore 0 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 1 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 2 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 3 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 4 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 5 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 6 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 7 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 8 as core 0 on socket 0 00:06:54.449 EAL: Detected lcore 9 as core 0 on socket 0 00:06:54.708 EAL: Maximum logical cores by configuration: 128 00:06:54.708 EAL: Detected CPU lcores: 10 00:06:54.708 EAL: Detected NUMA nodes: 1 00:06:54.708 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:54.708 EAL: Detected shared linkage of DPDK 00:06:54.708 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:54.708 EAL: Selected IOVA mode 'PA' 00:06:54.708 EAL: Probing VFIO support... 00:06:54.708 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:54.708 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:54.708 EAL: Ask a virtual area of 0x2e000 bytes 00:06:54.708 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:54.708 EAL: Setting up physically contiguous memory... 00:06:54.708 EAL: Setting maximum number of open files to 524288 00:06:54.708 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:54.708 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:54.708 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.708 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:54.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.709 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.709 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:54.709 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:54.709 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.709 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:54.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.709 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.709 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:54.709 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:54.709 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.709 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:54.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.709 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.709 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:54.709 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:54.709 EAL: Ask a virtual area of 0x61000 bytes 00:06:54.709 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:54.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:54.709 EAL: Ask a virtual area of 0x400000000 bytes 00:06:54.709 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:54.709 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:54.709 EAL: Hugepages will be freed exactly as allocated. 00:06:54.709 EAL: No shared files mode enabled, IPC is disabled 00:06:54.709 EAL: No shared files mode enabled, IPC is disabled 00:06:54.709 EAL: TSC frequency is ~2490000 KHz 00:06:54.709 EAL: Main lcore 0 is ready (tid=7faa937d7a40;cpuset=[0]) 00:06:54.709 EAL: Trying to obtain current memory policy. 00:06:54.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.709 EAL: Restoring previous memory policy: 0 00:06:54.709 EAL: request: mp_malloc_sync 00:06:54.709 EAL: No shared files mode enabled, IPC is disabled 00:06:54.709 EAL: Heap on socket 0 was expanded by 2MB 00:06:54.709 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:54.709 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:54.709 EAL: Mem event callback 'spdk:(nil)' registered 00:06:54.709 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:54.709 00:06:54.709 00:06:54.709 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.709 http://cunit.sourceforge.net/ 00:06:54.709 00:06:54.709 00:06:54.709 Suite: components_suite 00:06:55.276 Test: vtophys_malloc_test ...passed 00:06:55.276 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:55.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.276 EAL: Restoring previous memory policy: 4 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was expanded by 4MB 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was shrunk by 4MB 00:06:55.276 EAL: Trying to obtain current memory policy. 00:06:55.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.276 EAL: Restoring previous memory policy: 4 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was expanded by 6MB 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was shrunk by 6MB 00:06:55.276 EAL: Trying to obtain current memory policy. 00:06:55.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.276 EAL: Restoring previous memory policy: 4 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was expanded by 10MB 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was shrunk by 10MB 00:06:55.276 EAL: Trying to obtain current memory policy. 00:06:55.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.276 EAL: Restoring previous memory policy: 4 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was expanded by 18MB 00:06:55.276 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.276 EAL: request: mp_malloc_sync 00:06:55.276 EAL: No shared files mode enabled, IPC is disabled 00:06:55.276 EAL: Heap on socket 0 was shrunk by 18MB 00:06:55.534 EAL: Trying to obtain current memory policy. 00:06:55.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.534 EAL: Restoring previous memory policy: 4 00:06:55.534 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.534 EAL: request: mp_malloc_sync 00:06:55.534 EAL: No shared files mode enabled, IPC is disabled 00:06:55.534 EAL: Heap on socket 0 was expanded by 34MB 00:06:55.534 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.534 EAL: request: mp_malloc_sync 00:06:55.534 EAL: No shared files mode enabled, IPC is disabled 00:06:55.534 EAL: Heap on socket 0 was shrunk by 34MB 00:06:55.534 EAL: Trying to obtain current memory policy. 
00:06:55.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.534 EAL: Restoring previous memory policy: 4 00:06:55.534 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.534 EAL: request: mp_malloc_sync 00:06:55.534 EAL: No shared files mode enabled, IPC is disabled 00:06:55.534 EAL: Heap on socket 0 was expanded by 66MB 00:06:55.793 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.793 EAL: request: mp_malloc_sync 00:06:55.793 EAL: No shared files mode enabled, IPC is disabled 00:06:55.793 EAL: Heap on socket 0 was shrunk by 66MB 00:06:55.793 EAL: Trying to obtain current memory policy. 00:06:55.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:55.793 EAL: Restoring previous memory policy: 4 00:06:55.793 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.793 EAL: request: mp_malloc_sync 00:06:55.793 EAL: No shared files mode enabled, IPC is disabled 00:06:55.793 EAL: Heap on socket 0 was expanded by 130MB 00:06:56.052 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.052 EAL: request: mp_malloc_sync 00:06:56.052 EAL: No shared files mode enabled, IPC is disabled 00:06:56.052 EAL: Heap on socket 0 was shrunk by 130MB 00:06:56.311 EAL: Trying to obtain current memory policy. 00:06:56.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:56.570 EAL: Restoring previous memory policy: 4 00:06:56.570 EAL: Calling mem event callback 'spdk:(nil)' 00:06:56.570 EAL: request: mp_malloc_sync 00:06:56.570 EAL: No shared files mode enabled, IPC is disabled 00:06:56.570 EAL: Heap on socket 0 was expanded by 258MB 00:06:56.829 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.088 EAL: request: mp_malloc_sync 00:06:57.088 EAL: No shared files mode enabled, IPC is disabled 00:06:57.088 EAL: Heap on socket 0 was shrunk by 258MB 00:06:57.347 EAL: Trying to obtain current memory policy. 00:06:57.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.606 EAL: Restoring previous memory policy: 4 00:06:57.606 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.606 EAL: request: mp_malloc_sync 00:06:57.606 EAL: No shared files mode enabled, IPC is disabled 00:06:57.606 EAL: Heap on socket 0 was expanded by 514MB 00:06:58.546 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.807 EAL: request: mp_malloc_sync 00:06:58.807 EAL: No shared files mode enabled, IPC is disabled 00:06:58.807 EAL: Heap on socket 0 was shrunk by 514MB 00:06:59.744 EAL: Trying to obtain current memory policy. 
00:06:59.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.003 EAL: Restoring previous memory policy: 4 00:07:00.003 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.003 EAL: request: mp_malloc_sync 00:07:00.003 EAL: No shared files mode enabled, IPC is disabled 00:07:00.003 EAL: Heap on socket 0 was expanded by 1026MB 00:07:01.908 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.166 EAL: request: mp_malloc_sync 00:07:02.166 EAL: No shared files mode enabled, IPC is disabled 00:07:02.166 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:04.068 passed 00:07:04.068 00:07:04.068 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.068 suites 1 1 n/a 0 0 00:07:04.068 tests 2 2 2 0 0 00:07:04.068 asserts 5719 5719 5719 0 n/a 00:07:04.068 00:07:04.068 Elapsed time = 9.122 seconds 00:07:04.068 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.068 EAL: request: mp_malloc_sync 00:07:04.068 EAL: No shared files mode enabled, IPC is disabled 00:07:04.068 EAL: Heap on socket 0 was shrunk by 2MB 00:07:04.068 EAL: No shared files mode enabled, IPC is disabled 00:07:04.068 EAL: No shared files mode enabled, IPC is disabled 00:07:04.068 EAL: No shared files mode enabled, IPC is disabled 00:07:04.068 00:07:04.068 real 0m9.464s 00:07:04.068 user 0m7.982s 00:07:04.068 sys 0m1.321s 00:07:04.068 12:17:27 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.068 12:17:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:04.068 ************************************ 00:07:04.068 END TEST env_vtophys 00:07:04.068 ************************************ 00:07:04.068 12:17:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:04.068 12:17:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.068 12:17:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.068 12:17:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.068 ************************************ 00:07:04.068 START TEST env_pci 00:07:04.068 ************************************ 00:07:04.068 12:17:27 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:04.068 00:07:04.068 00:07:04.068 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.068 http://cunit.sourceforge.net/ 00:07:04.068 00:07:04.068 00:07:04.068 Suite: pci 00:07:04.068 Test: pci_hook ...[2024-10-07 12:17:27.267936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57961 has claimed it 00:07:04.068 passed 00:07:04.068 00:07:04.068 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.068 suites 1 1 n/a 0 0 00:07:04.068 tests 1 1 1 0 0 00:07:04.068 asserts 25 25 25 0 n/a 00:07:04.068 00:07:04.068 Elapsed time = 0.006 seconds 00:07:04.068 EAL: Cannot find device (10000:00:01.0) 00:07:04.068 EAL: Failed to attach device on primary process 00:07:04.068 00:07:04.068 real 0m0.108s 00:07:04.068 user 0m0.046s 00:07:04.068 sys 0m0.062s 00:07:04.068 12:17:27 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.068 12:17:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:04.068 ************************************ 00:07:04.068 END TEST env_pci 00:07:04.068 ************************************ 00:07:04.326 12:17:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:04.326 12:17:27 env -- env/env.sh@15 -- # uname 00:07:04.326 12:17:27 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:04.326 12:17:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:04.326 12:17:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:04.326 12:17:27 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:04.326 12:17:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.326 12:17:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.326 ************************************ 00:07:04.326 START TEST env_dpdk_post_init 00:07:04.326 ************************************ 00:07:04.326 12:17:27 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:04.326 EAL: Detected CPU lcores: 10 00:07:04.326 EAL: Detected NUMA nodes: 1 00:07:04.326 EAL: Detected shared linkage of DPDK 00:07:04.326 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:04.326 EAL: Selected IOVA mode 'PA' 00:07:04.585 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:04.585 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:04.585 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:04.585 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:04.585 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:04.585 Starting DPDK initialization... 00:07:04.586 Starting SPDK post initialization... 00:07:04.586 SPDK NVMe probe 00:07:04.586 Attaching to 0000:00:10.0 00:07:04.586 Attaching to 0000:00:11.0 00:07:04.586 Attaching to 0000:00:12.0 00:07:04.586 Attaching to 0000:00:13.0 00:07:04.586 Attached to 0000:00:10.0 00:07:04.586 Attached to 0000:00:11.0 00:07:04.586 Attached to 0000:00:13.0 00:07:04.586 Attached to 0000:00:12.0 00:07:04.586 Cleaning up... 
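Note: env_dpdk_post_init probes all four emulated 1b36:0010 controllers through spdk_nvme and detaches cleanly. The harness invokes it with a single-core mask and a pinned virtual address base; reproducing that invocation by hand would look roughly like this (a sketch assuming the build tree from the log and root privileges for PCI access):

    cd /home/vagrant/spdk_repo/spdk
    # -c 0x1 limits EAL to lcore 0; --base-virtaddr pins DPDK's mapping base
    # so memory is reserved at predictable addresses across runs.
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
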
00:07:04.586 00:07:04.586 real 0m0.332s 00:07:04.586 user 0m0.115s 00:07:04.586 sys 0m0.119s 00:07:04.586 12:17:27 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.586 12:17:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:04.586 ************************************ 00:07:04.586 END TEST env_dpdk_post_init 00:07:04.586 ************************************ 00:07:04.586 12:17:27 env -- env/env.sh@26 -- # uname 00:07:04.586 12:17:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:04.586 12:17:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:04.586 12:17:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.586 12:17:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.586 12:17:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:04.586 ************************************ 00:07:04.586 START TEST env_mem_callbacks 00:07:04.586 ************************************ 00:07:04.586 12:17:27 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:04.844 EAL: Detected CPU lcores: 10 00:07:04.844 EAL: Detected NUMA nodes: 1 00:07:04.844 EAL: Detected shared linkage of DPDK 00:07:04.844 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:04.844 EAL: Selected IOVA mode 'PA' 00:07:04.844 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:04.844 00:07:04.844 00:07:04.844 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.844 http://cunit.sourceforge.net/ 00:07:04.844 00:07:04.844 00:07:04.844 Suite: memory 00:07:04.844 Test: test ... 00:07:04.844 register 0x200000200000 2097152 00:07:04.844 malloc 3145728 00:07:04.844 register 0x200000400000 4194304 00:07:04.844 buf 0x2000004fffc0 len 3145728 PASSED 00:07:04.844 malloc 64 00:07:04.844 buf 0x2000004ffec0 len 64 PASSED 00:07:04.844 malloc 4194304 00:07:04.844 register 0x200000800000 6291456 00:07:04.844 buf 0x2000009fffc0 len 4194304 PASSED 00:07:04.844 free 0x2000004fffc0 3145728 00:07:04.844 free 0x2000004ffec0 64 00:07:04.844 unregister 0x200000400000 4194304 PASSED 00:07:04.844 free 0x2000009fffc0 4194304 00:07:04.844 unregister 0x200000800000 6291456 PASSED 00:07:04.844 malloc 8388608 00:07:04.844 register 0x200000400000 10485760 00:07:04.844 buf 0x2000005fffc0 len 8388608 PASSED 00:07:04.844 free 0x2000005fffc0 8388608 00:07:04.844 unregister 0x200000400000 10485760 PASSED 00:07:04.844 passed 00:07:04.844 00:07:04.844 Run Summary: Type Total Ran Passed Failed Inactive 00:07:04.844 suites 1 1 n/a 0 0 00:07:04.844 tests 1 1 1 0 0 00:07:04.844 asserts 15 15 15 0 n/a 00:07:04.844 00:07:04.844 Elapsed time = 0.079 seconds 00:07:04.844 00:07:04.844 real 0m0.289s 00:07:04.844 user 0m0.100s 00:07:04.844 sys 0m0.087s 00:07:04.844 12:17:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.844 12:17:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:04.844 ************************************ 00:07:04.844 END TEST env_mem_callbacks 00:07:04.844 ************************************ 00:07:05.102 00:07:05.102 real 0m11.103s 00:07:05.102 user 0m8.743s 00:07:05.102 sys 0m1.994s 00:07:05.102 12:17:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.102 12:17:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 ************************************ 00:07:05.102 END TEST env 00:07:05.102 
************************************ 00:07:05.102 12:17:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:05.102 12:17:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.102 12:17:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.102 12:17:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 ************************************ 00:07:05.102 START TEST rpc 00:07:05.102 ************************************ 00:07:05.102 12:17:28 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:05.102 * Looking for test storage... 00:07:05.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:05.102 12:17:28 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.102 12:17:28 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.102 12:17:28 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.360 12:17:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.360 12:17:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.360 12:17:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.360 12:17:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.360 12:17:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.360 12:17:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.360 12:17:28 rpc -- scripts/common.sh@345 -- # : 1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.360 12:17:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.360 12:17:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.360 12:17:28 rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.360 12:17:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.360 12:17:28 rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.360 12:17:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.360 12:17:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.360 12:17:28 rpc -- scripts/common.sh@368 -- # return 0 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.360 --rc genhtml_branch_coverage=1 00:07:05.360 --rc genhtml_function_coverage=1 00:07:05.360 --rc genhtml_legend=1 00:07:05.360 --rc geninfo_all_blocks=1 00:07:05.360 --rc geninfo_unexecuted_blocks=1 00:07:05.360 00:07:05.360 ' 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.360 --rc genhtml_branch_coverage=1 00:07:05.360 --rc genhtml_function_coverage=1 00:07:05.360 --rc genhtml_legend=1 00:07:05.360 --rc geninfo_all_blocks=1 00:07:05.360 --rc geninfo_unexecuted_blocks=1 00:07:05.360 00:07:05.360 ' 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.360 --rc genhtml_branch_coverage=1 00:07:05.360 --rc genhtml_function_coverage=1 00:07:05.360 --rc genhtml_legend=1 00:07:05.360 --rc geninfo_all_blocks=1 00:07:05.360 --rc geninfo_unexecuted_blocks=1 00:07:05.360 00:07:05.360 ' 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.360 --rc genhtml_branch_coverage=1 00:07:05.360 --rc genhtml_function_coverage=1 00:07:05.360 --rc genhtml_legend=1 00:07:05.360 --rc geninfo_all_blocks=1 00:07:05.360 --rc geninfo_unexecuted_blocks=1 00:07:05.360 00:07:05.360 ' 00:07:05.360 12:17:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58088 00:07:05.360 12:17:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:05.360 12:17:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.360 12:17:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58088 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@831 -- # '[' -z 58088 ']' 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
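The rpc suite launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and blocks in waitforlisten until the UNIX-domain RPC socket answers. A rough equivalent of that startup dance (a sketch; the real helper is waitforlisten in autotest_common.sh, and rpc_get_methods is used here only as an assumed cheap liveness probe):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # poll the RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt up as pid $spdk_pid"
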
00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.360 12:17:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.360 [2024-10-07 12:17:28.581381] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:05.360 [2024-10-07 12:17:28.581500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58088 ] 00:07:05.618 [2024-10-07 12:17:28.752471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.876 [2024-10-07 12:17:28.955237] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:05.876 [2024-10-07 12:17:28.955294] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58088' to capture a snapshot of events at runtime. 00:07:05.877 [2024-10-07 12:17:28.955324] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.877 [2024-10-07 12:17:28.955337] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.877 [2024-10-07 12:17:28.955347] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58088 for offline analysis/debug. 00:07:05.877 [2024-10-07 12:17:28.956609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.812 12:17:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.812 12:17:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:06.812 12:17:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:06.812 12:17:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:06.812 12:17:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:06.812 12:17:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:06.812 12:17:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.812 12:17:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.812 12:17:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.812 ************************************ 00:07:06.812 START TEST rpc_integrity 00:07:06.812 ************************************ 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.812 12:17:29 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.812 12:17:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:06.812 { 00:07:06.812 "name": "Malloc0", 00:07:06.812 "aliases": [ 00:07:06.812 "6ebcb203-e203-467c-9227-1cae77669269" 00:07:06.812 ], 00:07:06.812 "product_name": "Malloc disk", 00:07:06.812 "block_size": 512, 00:07:06.812 "num_blocks": 16384, 00:07:06.812 "uuid": "6ebcb203-e203-467c-9227-1cae77669269", 00:07:06.812 "assigned_rate_limits": { 00:07:06.812 "rw_ios_per_sec": 0, 00:07:06.812 "rw_mbytes_per_sec": 0, 00:07:06.812 "r_mbytes_per_sec": 0, 00:07:06.812 "w_mbytes_per_sec": 0 00:07:06.812 }, 00:07:06.812 "claimed": false, 00:07:06.812 "zoned": false, 00:07:06.812 "supported_io_types": { 00:07:06.812 "read": true, 00:07:06.812 "write": true, 00:07:06.812 "unmap": true, 00:07:06.812 "flush": true, 00:07:06.812 "reset": true, 00:07:06.812 "nvme_admin": false, 00:07:06.812 "nvme_io": false, 00:07:06.812 "nvme_io_md": false, 00:07:06.812 "write_zeroes": true, 00:07:06.812 "zcopy": true, 00:07:06.812 "get_zone_info": false, 00:07:06.812 "zone_management": false, 00:07:06.812 "zone_append": false, 00:07:06.812 "compare": false, 00:07:06.812 "compare_and_write": false, 00:07:06.812 "abort": true, 00:07:06.812 "seek_hole": false, 00:07:06.812 "seek_data": false, 00:07:06.812 "copy": true, 00:07:06.812 "nvme_iov_md": false 00:07:06.812 }, 00:07:06.812 "memory_domains": [ 00:07:06.812 { 00:07:06.812 "dma_device_id": "system", 00:07:06.812 "dma_device_type": 1 00:07:06.812 }, 00:07:06.812 { 00:07:06.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.812 "dma_device_type": 2 00:07:06.812 } 00:07:06.812 ], 00:07:06.812 "driver_specific": {} 00:07:06.812 } 00:07:06.812 ]' 00:07:06.812 12:17:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:06.812 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:06.812 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:06.812 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.812 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.812 [2024-10-07 12:17:30.021953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:06.812 [2024-10-07 12:17:30.022011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:06.812 [2024-10-07 12:17:30.022054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:06.812 [2024-10-07 12:17:30.022072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:06.812 [2024-10-07 12:17:30.024499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:06.812 [2024-10-07 12:17:30.024546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:06.812 Passthru0 00:07:06.812 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.812 
12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:06.812 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.812 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.812 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.812 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:06.812 { 00:07:06.812 "name": "Malloc0", 00:07:06.812 "aliases": [ 00:07:06.812 "6ebcb203-e203-467c-9227-1cae77669269" 00:07:06.812 ], 00:07:06.812 "product_name": "Malloc disk", 00:07:06.812 "block_size": 512, 00:07:06.812 "num_blocks": 16384, 00:07:06.812 "uuid": "6ebcb203-e203-467c-9227-1cae77669269", 00:07:06.812 "assigned_rate_limits": { 00:07:06.812 "rw_ios_per_sec": 0, 00:07:06.812 "rw_mbytes_per_sec": 0, 00:07:06.812 "r_mbytes_per_sec": 0, 00:07:06.812 "w_mbytes_per_sec": 0 00:07:06.812 }, 00:07:06.812 "claimed": true, 00:07:06.812 "claim_type": "exclusive_write", 00:07:06.812 "zoned": false, 00:07:06.812 "supported_io_types": { 00:07:06.812 "read": true, 00:07:06.812 "write": true, 00:07:06.813 "unmap": true, 00:07:06.813 "flush": true, 00:07:06.813 "reset": true, 00:07:06.813 "nvme_admin": false, 00:07:06.813 "nvme_io": false, 00:07:06.813 "nvme_io_md": false, 00:07:06.813 "write_zeroes": true, 00:07:06.813 "zcopy": true, 00:07:06.813 "get_zone_info": false, 00:07:06.813 "zone_management": false, 00:07:06.813 "zone_append": false, 00:07:06.813 "compare": false, 00:07:06.813 "compare_and_write": false, 00:07:06.813 "abort": true, 00:07:06.813 "seek_hole": false, 00:07:06.813 "seek_data": false, 00:07:06.813 "copy": true, 00:07:06.813 "nvme_iov_md": false 00:07:06.813 }, 00:07:06.813 "memory_domains": [ 00:07:06.813 { 00:07:06.813 "dma_device_id": "system", 00:07:06.813 "dma_device_type": 1 00:07:06.813 }, 00:07:06.813 { 00:07:06.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.813 "dma_device_type": 2 00:07:06.813 } 00:07:06.813 ], 00:07:06.813 "driver_specific": {} 00:07:06.813 }, 00:07:06.813 { 00:07:06.813 "name": "Passthru0", 00:07:06.813 "aliases": [ 00:07:06.813 "c2076df0-7750-548d-84c0-a54d79c2350e" 00:07:06.813 ], 00:07:06.813 "product_name": "passthru", 00:07:06.813 "block_size": 512, 00:07:06.813 "num_blocks": 16384, 00:07:06.813 "uuid": "c2076df0-7750-548d-84c0-a54d79c2350e", 00:07:06.813 "assigned_rate_limits": { 00:07:06.813 "rw_ios_per_sec": 0, 00:07:06.813 "rw_mbytes_per_sec": 0, 00:07:06.813 "r_mbytes_per_sec": 0, 00:07:06.813 "w_mbytes_per_sec": 0 00:07:06.813 }, 00:07:06.813 "claimed": false, 00:07:06.813 "zoned": false, 00:07:06.813 "supported_io_types": { 00:07:06.813 "read": true, 00:07:06.813 "write": true, 00:07:06.813 "unmap": true, 00:07:06.813 "flush": true, 00:07:06.813 "reset": true, 00:07:06.813 "nvme_admin": false, 00:07:06.813 "nvme_io": false, 00:07:06.813 "nvme_io_md": false, 00:07:06.813 "write_zeroes": true, 00:07:06.813 "zcopy": true, 00:07:06.813 "get_zone_info": false, 00:07:06.813 "zone_management": false, 00:07:06.813 "zone_append": false, 00:07:06.813 "compare": false, 00:07:06.813 "compare_and_write": false, 00:07:06.813 "abort": true, 00:07:06.813 "seek_hole": false, 00:07:06.813 "seek_data": false, 00:07:06.813 "copy": true, 00:07:06.813 "nvme_iov_md": false 00:07:06.813 }, 00:07:06.813 "memory_domains": [ 00:07:06.813 { 00:07:06.813 "dma_device_id": "system", 00:07:06.813 "dma_device_type": 1 00:07:06.813 }, 00:07:06.813 { 00:07:06.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.813 "dma_device_type": 2 
00:07:06.813 } 00:07:06.813 ], 00:07:06.813 "driver_specific": { 00:07:06.813 "passthru": { 00:07:06.813 "name": "Passthru0", 00:07:06.813 "base_bdev_name": "Malloc0" 00:07:06.813 } 00:07:06.813 } 00:07:06.813 } 00:07:06.813 ]' 00:07:06.813 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.095 12:17:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.095 00:07:07.095 real 0m0.338s 00:07:07.095 user 0m0.182s 00:07:07.095 sys 0m0.064s 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 ************************************ 00:07:07.095 END TEST rpc_integrity 00:07:07.095 ************************************ 00:07:07.095 12:17:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:07.095 12:17:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.095 12:17:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.095 12:17:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 ************************************ 00:07:07.095 START TEST rpc_plugins 00:07:07.095 ************************************ 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:07.095 { 00:07:07.095 "name": "Malloc1", 00:07:07.095 "aliases": 
[ 00:07:07.095 "a5497814-0ec7-4f84-a8f3-ed46cf1e56ec" 00:07:07.095 ], 00:07:07.095 "product_name": "Malloc disk", 00:07:07.095 "block_size": 4096, 00:07:07.095 "num_blocks": 256, 00:07:07.095 "uuid": "a5497814-0ec7-4f84-a8f3-ed46cf1e56ec", 00:07:07.095 "assigned_rate_limits": { 00:07:07.095 "rw_ios_per_sec": 0, 00:07:07.095 "rw_mbytes_per_sec": 0, 00:07:07.095 "r_mbytes_per_sec": 0, 00:07:07.095 "w_mbytes_per_sec": 0 00:07:07.095 }, 00:07:07.095 "claimed": false, 00:07:07.095 "zoned": false, 00:07:07.095 "supported_io_types": { 00:07:07.095 "read": true, 00:07:07.095 "write": true, 00:07:07.095 "unmap": true, 00:07:07.095 "flush": true, 00:07:07.095 "reset": true, 00:07:07.095 "nvme_admin": false, 00:07:07.095 "nvme_io": false, 00:07:07.095 "nvme_io_md": false, 00:07:07.095 "write_zeroes": true, 00:07:07.095 "zcopy": true, 00:07:07.095 "get_zone_info": false, 00:07:07.095 "zone_management": false, 00:07:07.095 "zone_append": false, 00:07:07.095 "compare": false, 00:07:07.095 "compare_and_write": false, 00:07:07.095 "abort": true, 00:07:07.095 "seek_hole": false, 00:07:07.095 "seek_data": false, 00:07:07.095 "copy": true, 00:07:07.095 "nvme_iov_md": false 00:07:07.095 }, 00:07:07.095 "memory_domains": [ 00:07:07.095 { 00:07:07.095 "dma_device_id": "system", 00:07:07.095 "dma_device_type": 1 00:07:07.095 }, 00:07:07.095 { 00:07:07.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.095 "dma_device_type": 2 00:07:07.095 } 00:07:07.095 ], 00:07:07.095 "driver_specific": {} 00:07:07.095 } 00:07:07.095 ]' 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.095 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:07.095 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:07.355 12:17:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:07.355 00:07:07.355 real 0m0.155s 00:07:07.355 user 0m0.101s 00:07:07.355 sys 0m0.022s 00:07:07.355 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.355 12:17:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.355 ************************************ 00:07:07.355 END TEST rpc_plugins 00:07:07.355 ************************************ 00:07:07.355 12:17:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:07.355 12:17:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.355 12:17:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.355 12:17:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.355 ************************************ 00:07:07.355 START TEST rpc_trace_cmd_test 00:07:07.355 ************************************ 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:07.355 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58088", 00:07:07.355 "tpoint_group_mask": "0x8", 00:07:07.355 "iscsi_conn": { 00:07:07.355 "mask": "0x2", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "scsi": { 00:07:07.355 "mask": "0x4", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "bdev": { 00:07:07.355 "mask": "0x8", 00:07:07.355 "tpoint_mask": "0xffffffffffffffff" 00:07:07.355 }, 00:07:07.355 "nvmf_rdma": { 00:07:07.355 "mask": "0x10", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "nvmf_tcp": { 00:07:07.355 "mask": "0x20", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "ftl": { 00:07:07.355 "mask": "0x40", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "blobfs": { 00:07:07.355 "mask": "0x80", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "dsa": { 00:07:07.355 "mask": "0x200", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "thread": { 00:07:07.355 "mask": "0x400", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "nvme_pcie": { 00:07:07.355 "mask": "0x800", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "iaa": { 00:07:07.355 "mask": "0x1000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "nvme_tcp": { 00:07:07.355 "mask": "0x2000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "bdev_nvme": { 00:07:07.355 "mask": "0x4000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "sock": { 00:07:07.355 "mask": "0x8000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "blob": { 00:07:07.355 "mask": "0x10000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "bdev_raid": { 00:07:07.355 "mask": "0x20000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 }, 00:07:07.355 "scheduler": { 00:07:07.355 "mask": "0x40000", 00:07:07.355 "tpoint_mask": "0x0" 00:07:07.355 } 00:07:07.355 }' 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:07.355 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:07.615 00:07:07.615 real 0m0.201s 00:07:07.615 user 0m0.156s 00:07:07.615 sys 0m0.036s 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:07.615 ************************************ 00:07:07.615 END TEST rpc_trace_cmd_test 00:07:07.615 ************************************ 00:07:07.615 12:17:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 12:17:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:07.615 12:17:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:07.615 12:17:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:07.615 12:17:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.615 12:17:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.615 12:17:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 ************************************ 00:07:07.615 START TEST rpc_daemon_integrity 00:07:07.615 ************************************ 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.615 { 00:07:07.615 "name": "Malloc2", 00:07:07.615 "aliases": [ 00:07:07.615 "2928f166-34f3-4c6a-8d86-5709fbeb41b6" 00:07:07.615 ], 00:07:07.615 "product_name": "Malloc disk", 00:07:07.615 "block_size": 512, 00:07:07.615 "num_blocks": 16384, 00:07:07.615 "uuid": "2928f166-34f3-4c6a-8d86-5709fbeb41b6", 00:07:07.615 "assigned_rate_limits": { 00:07:07.615 "rw_ios_per_sec": 0, 00:07:07.615 "rw_mbytes_per_sec": 0, 00:07:07.615 "r_mbytes_per_sec": 0, 00:07:07.615 "w_mbytes_per_sec": 0 00:07:07.615 }, 00:07:07.615 "claimed": false, 00:07:07.615 "zoned": false, 00:07:07.615 "supported_io_types": { 00:07:07.615 "read": true, 00:07:07.615 "write": true, 00:07:07.615 "unmap": true, 00:07:07.615 "flush": true, 00:07:07.615 "reset": true, 00:07:07.615 "nvme_admin": false, 00:07:07.615 "nvme_io": false, 00:07:07.615 "nvme_io_md": false, 00:07:07.615 "write_zeroes": true, 00:07:07.615 "zcopy": true, 00:07:07.615 "get_zone_info": false, 00:07:07.615 "zone_management": false, 00:07:07.615 "zone_append": false, 00:07:07.615 "compare": false, 00:07:07.615 
"compare_and_write": false, 00:07:07.615 "abort": true, 00:07:07.615 "seek_hole": false, 00:07:07.615 "seek_data": false, 00:07:07.615 "copy": true, 00:07:07.615 "nvme_iov_md": false 00:07:07.615 }, 00:07:07.615 "memory_domains": [ 00:07:07.615 { 00:07:07.615 "dma_device_id": "system", 00:07:07.615 "dma_device_type": 1 00:07:07.615 }, 00:07:07.615 { 00:07:07.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.615 "dma_device_type": 2 00:07:07.615 } 00:07:07.615 ], 00:07:07.615 "driver_specific": {} 00:07:07.615 } 00:07:07.615 ]' 00:07:07.615 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.876 [2024-10-07 12:17:30.928736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:07.876 [2024-10-07 12:17:30.928801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.876 [2024-10-07 12:17:30.928825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:07.876 [2024-10-07 12:17:30.928839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.876 [2024-10-07 12:17:30.931378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.876 [2024-10-07 12:17:30.931424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:07.876 Passthru0 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.876 { 00:07:07.876 "name": "Malloc2", 00:07:07.876 "aliases": [ 00:07:07.876 "2928f166-34f3-4c6a-8d86-5709fbeb41b6" 00:07:07.876 ], 00:07:07.876 "product_name": "Malloc disk", 00:07:07.876 "block_size": 512, 00:07:07.876 "num_blocks": 16384, 00:07:07.876 "uuid": "2928f166-34f3-4c6a-8d86-5709fbeb41b6", 00:07:07.876 "assigned_rate_limits": { 00:07:07.876 "rw_ios_per_sec": 0, 00:07:07.876 "rw_mbytes_per_sec": 0, 00:07:07.876 "r_mbytes_per_sec": 0, 00:07:07.876 "w_mbytes_per_sec": 0 00:07:07.876 }, 00:07:07.876 "claimed": true, 00:07:07.876 "claim_type": "exclusive_write", 00:07:07.876 "zoned": false, 00:07:07.876 "supported_io_types": { 00:07:07.876 "read": true, 00:07:07.876 "write": true, 00:07:07.876 "unmap": true, 00:07:07.876 "flush": true, 00:07:07.876 "reset": true, 00:07:07.876 "nvme_admin": false, 00:07:07.876 "nvme_io": false, 00:07:07.876 "nvme_io_md": false, 00:07:07.876 "write_zeroes": true, 00:07:07.876 "zcopy": true, 00:07:07.876 "get_zone_info": false, 00:07:07.876 "zone_management": false, 00:07:07.876 "zone_append": false, 00:07:07.876 "compare": false, 00:07:07.876 "compare_and_write": false, 00:07:07.876 "abort": true, 00:07:07.876 "seek_hole": false, 00:07:07.876 "seek_data": false, 
00:07:07.876 "copy": true, 00:07:07.876 "nvme_iov_md": false 00:07:07.876 }, 00:07:07.876 "memory_domains": [ 00:07:07.876 { 00:07:07.876 "dma_device_id": "system", 00:07:07.876 "dma_device_type": 1 00:07:07.876 }, 00:07:07.876 { 00:07:07.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.876 "dma_device_type": 2 00:07:07.876 } 00:07:07.876 ], 00:07:07.876 "driver_specific": {} 00:07:07.876 }, 00:07:07.876 { 00:07:07.876 "name": "Passthru0", 00:07:07.876 "aliases": [ 00:07:07.876 "c3dea87d-e175-59ed-b198-ae05d673cb96" 00:07:07.876 ], 00:07:07.876 "product_name": "passthru", 00:07:07.876 "block_size": 512, 00:07:07.876 "num_blocks": 16384, 00:07:07.876 "uuid": "c3dea87d-e175-59ed-b198-ae05d673cb96", 00:07:07.876 "assigned_rate_limits": { 00:07:07.876 "rw_ios_per_sec": 0, 00:07:07.876 "rw_mbytes_per_sec": 0, 00:07:07.876 "r_mbytes_per_sec": 0, 00:07:07.876 "w_mbytes_per_sec": 0 00:07:07.876 }, 00:07:07.876 "claimed": false, 00:07:07.876 "zoned": false, 00:07:07.876 "supported_io_types": { 00:07:07.876 "read": true, 00:07:07.876 "write": true, 00:07:07.876 "unmap": true, 00:07:07.876 "flush": true, 00:07:07.876 "reset": true, 00:07:07.876 "nvme_admin": false, 00:07:07.876 "nvme_io": false, 00:07:07.876 "nvme_io_md": false, 00:07:07.876 "write_zeroes": true, 00:07:07.876 "zcopy": true, 00:07:07.876 "get_zone_info": false, 00:07:07.876 "zone_management": false, 00:07:07.876 "zone_append": false, 00:07:07.876 "compare": false, 00:07:07.876 "compare_and_write": false, 00:07:07.876 "abort": true, 00:07:07.876 "seek_hole": false, 00:07:07.876 "seek_data": false, 00:07:07.876 "copy": true, 00:07:07.876 "nvme_iov_md": false 00:07:07.876 }, 00:07:07.876 "memory_domains": [ 00:07:07.876 { 00:07:07.876 "dma_device_id": "system", 00:07:07.876 "dma_device_type": 1 00:07:07.876 }, 00:07:07.876 { 00:07:07.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.876 "dma_device_type": 2 00:07:07.876 } 00:07:07.876 ], 00:07:07.876 "driver_specific": { 00:07:07.876 "passthru": { 00:07:07.876 "name": "Passthru0", 00:07:07.876 "base_bdev_name": "Malloc2" 00:07:07.876 } 00:07:07.876 } 00:07:07.876 } 00:07:07.876 ]' 00:07:07.876 12:17:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.876 00:07:07.876 real 0m0.341s 00:07:07.876 user 0m0.185s 00:07:07.876 sys 0m0.063s 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.876 ************************************ 00:07:07.876 END TEST rpc_daemon_integrity 00:07:07.876 ************************************ 00:07:07.876 12:17:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.138 12:17:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:08.138 12:17:31 rpc -- rpc/rpc.sh@84 -- # killprocess 58088 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 58088 ']' 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@954 -- # kill -0 58088 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@955 -- # uname 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58088 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.138 killing process with pid 58088 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58088' 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@969 -- # kill 58088 00:07:08.138 12:17:31 rpc -- common/autotest_common.sh@974 -- # wait 58088 00:07:10.666 00:07:10.666 real 0m5.486s 00:07:10.666 user 0m5.914s 00:07:10.666 sys 0m1.020s 00:07:10.666 12:17:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.666 12:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 ************************************ 00:07:10.666 END TEST rpc 00:07:10.666 ************************************ 00:07:10.666 12:17:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:10.666 12:17:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.666 12:17:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.666 12:17:33 -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 ************************************ 00:07:10.666 START TEST skip_rpc 00:07:10.666 ************************************ 00:07:10.667 12:17:33 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:10.667 * Looking for test storage... 
00:07:10.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:10.667 12:17:33 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:10.667 12:17:33 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:10.667 12:17:33 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.925 12:17:34 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.925 12:17:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.926 12:17:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.926 --rc genhtml_branch_coverage=1 00:07:10.926 --rc genhtml_function_coverage=1 00:07:10.926 --rc genhtml_legend=1 00:07:10.926 --rc geninfo_all_blocks=1 00:07:10.926 --rc geninfo_unexecuted_blocks=1 00:07:10.926 00:07:10.926 ' 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.926 --rc genhtml_branch_coverage=1 00:07:10.926 --rc genhtml_function_coverage=1 00:07:10.926 --rc genhtml_legend=1 00:07:10.926 --rc geninfo_all_blocks=1 00:07:10.926 --rc geninfo_unexecuted_blocks=1 00:07:10.926 00:07:10.926 ' 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.926 --rc genhtml_branch_coverage=1 00:07:10.926 --rc genhtml_function_coverage=1 00:07:10.926 --rc genhtml_legend=1 00:07:10.926 --rc geninfo_all_blocks=1 00:07:10.926 --rc geninfo_unexecuted_blocks=1 00:07:10.926 00:07:10.926 ' 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.926 --rc genhtml_branch_coverage=1 00:07:10.926 --rc genhtml_function_coverage=1 00:07:10.926 --rc genhtml_legend=1 00:07:10.926 --rc geninfo_all_blocks=1 00:07:10.926 --rc geninfo_unexecuted_blocks=1 00:07:10.926 00:07:10.926 ' 00:07:10.926 12:17:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:10.926 12:17:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:10.926 12:17:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.926 12:17:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.926 ************************************ 00:07:10.926 START TEST skip_rpc 00:07:10.926 ************************************ 00:07:10.926 12:17:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:10.926 12:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:10.926 12:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58328 00:07:10.926 12:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.926 12:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:10.926 [2024-10-07 12:17:34.149560] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:10.926 [2024-10-07 12:17:34.149840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58328 ] 00:07:11.185 [2024-10-07 12:17:34.322880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.444 [2024-10-07 12:17:34.523526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58328 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58328 ']' 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58328 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58328 00:07:16.713 killing process with pid 58328 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58328' 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58328 00:07:16.713 12:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58328 00:07:18.619 00:07:18.619 real 0m7.519s 00:07:18.619 user 0m7.037s 00:07:18.619 sys 0m0.408s 00:07:18.619 12:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.619 ************************************ 00:07:18.619 END TEST skip_rpc 00:07:18.619 ************************************ 00:07:18.619 12:17:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:07:18.619 12:17:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:18.619 12:17:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.619 12:17:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.619 12:17:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.619 ************************************ 00:07:18.619 START TEST skip_rpc_with_json 00:07:18.619 ************************************ 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58432 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58432 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58432 ']' 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.619 12:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:18.619 [2024-10-07 12:17:41.745990] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
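The skip_rpc test that just completed (pid 58328) is a pure negative check: with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so any RPC must fail, and the harness wraps the call in NOT to assert exactly that. Outside the harness the same check might look like this sketch (the literal sleep 5 mirrors the script; the helpers around it are stand-ins):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly answered with --no-rpc-server" >&2; exit 1
    fi
    kill "$pid"; wait "$pid"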
00:07:18.619 [2024-10-07 12:17:41.746333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ] 00:07:18.878 [2024-10-07 12:17:41.918395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.878 [2024-10-07 12:17:42.112181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.848 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.848 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:19.848 12:17:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:19.848 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.848 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:19.848 [2024-10-07 12:17:42.940580] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:19.848 request: 00:07:19.848 { 00:07:19.848 "trtype": "tcp", 00:07:19.848 "method": "nvmf_get_transports", 00:07:19.848 "req_id": 1 00:07:19.849 } 00:07:19.849 Got JSON-RPC error response 00:07:19.849 response: 00:07:19.849 { 00:07:19.849 "code": -19, 00:07:19.849 "message": "No such device" 00:07:19.849 } 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:19.849 [2024-10-07 12:17:42.956636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.849 12:17:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:20.110 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.110 12:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:20.110 { 00:07:20.110 "subsystems": [ 00:07:20.110 { 00:07:20.110 "subsystem": "fsdev", 00:07:20.110 "config": [ 00:07:20.110 { 00:07:20.110 "method": "fsdev_set_opts", 00:07:20.110 "params": { 00:07:20.110 "fsdev_io_pool_size": 65535, 00:07:20.110 "fsdev_io_cache_size": 256 00:07:20.110 } 00:07:20.110 } 00:07:20.110 ] 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "subsystem": "keyring", 00:07:20.110 "config": [] 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "subsystem": "iobuf", 00:07:20.110 "config": [ 00:07:20.110 { 00:07:20.110 "method": "iobuf_set_options", 00:07:20.110 "params": { 00:07:20.110 "small_pool_count": 8192, 00:07:20.110 "large_pool_count": 1024, 00:07:20.110 "small_bufsize": 8192, 00:07:20.110 "large_bufsize": 135168 00:07:20.110 } 00:07:20.110 } 00:07:20.110 ] 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "subsystem": "sock", 00:07:20.110 "config": [ 00:07:20.110 { 00:07:20.110 "method": 
"sock_set_default_impl", 00:07:20.110 "params": { 00:07:20.110 "impl_name": "posix" 00:07:20.110 } 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "method": "sock_impl_set_options", 00:07:20.110 "params": { 00:07:20.110 "impl_name": "ssl", 00:07:20.110 "recv_buf_size": 4096, 00:07:20.110 "send_buf_size": 4096, 00:07:20.110 "enable_recv_pipe": true, 00:07:20.110 "enable_quickack": false, 00:07:20.110 "enable_placement_id": 0, 00:07:20.110 "enable_zerocopy_send_server": true, 00:07:20.110 "enable_zerocopy_send_client": false, 00:07:20.110 "zerocopy_threshold": 0, 00:07:20.110 "tls_version": 0, 00:07:20.110 "enable_ktls": false 00:07:20.110 } 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "method": "sock_impl_set_options", 00:07:20.110 "params": { 00:07:20.110 "impl_name": "posix", 00:07:20.110 "recv_buf_size": 2097152, 00:07:20.110 "send_buf_size": 2097152, 00:07:20.110 "enable_recv_pipe": true, 00:07:20.110 "enable_quickack": false, 00:07:20.110 "enable_placement_id": 0, 00:07:20.110 "enable_zerocopy_send_server": true, 00:07:20.110 "enable_zerocopy_send_client": false, 00:07:20.110 "zerocopy_threshold": 0, 00:07:20.110 "tls_version": 0, 00:07:20.110 "enable_ktls": false 00:07:20.110 } 00:07:20.110 } 00:07:20.110 ] 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "subsystem": "vmd", 00:07:20.110 "config": [] 00:07:20.110 }, 00:07:20.110 { 00:07:20.110 "subsystem": "accel", 00:07:20.110 "config": [ 00:07:20.110 { 00:07:20.110 "method": "accel_set_options", 00:07:20.110 "params": { 00:07:20.110 "small_cache_size": 128, 00:07:20.111 "large_cache_size": 16, 00:07:20.111 "task_count": 2048, 00:07:20.111 "sequence_count": 2048, 00:07:20.111 "buf_count": 2048 00:07:20.111 } 00:07:20.111 } 00:07:20.111 ] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "bdev", 00:07:20.111 "config": [ 00:07:20.111 { 00:07:20.111 "method": "bdev_set_options", 00:07:20.111 "params": { 00:07:20.111 "bdev_io_pool_size": 65535, 00:07:20.111 "bdev_io_cache_size": 256, 00:07:20.111 "bdev_auto_examine": true, 00:07:20.111 "iobuf_small_cache_size": 128, 00:07:20.111 "iobuf_large_cache_size": 16 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "bdev_raid_set_options", 00:07:20.111 "params": { 00:07:20.111 "process_window_size_kb": 1024, 00:07:20.111 "process_max_bandwidth_mb_sec": 0 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "bdev_iscsi_set_options", 00:07:20.111 "params": { 00:07:20.111 "timeout_sec": 30 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "bdev_nvme_set_options", 00:07:20.111 "params": { 00:07:20.111 "action_on_timeout": "none", 00:07:20.111 "timeout_us": 0, 00:07:20.111 "timeout_admin_us": 0, 00:07:20.111 "keep_alive_timeout_ms": 10000, 00:07:20.111 "arbitration_burst": 0, 00:07:20.111 "low_priority_weight": 0, 00:07:20.111 "medium_priority_weight": 0, 00:07:20.111 "high_priority_weight": 0, 00:07:20.111 "nvme_adminq_poll_period_us": 10000, 00:07:20.111 "nvme_ioq_poll_period_us": 0, 00:07:20.111 "io_queue_requests": 0, 00:07:20.111 "delay_cmd_submit": true, 00:07:20.111 "transport_retry_count": 4, 00:07:20.111 "bdev_retry_count": 3, 00:07:20.111 "transport_ack_timeout": 0, 00:07:20.111 "ctrlr_loss_timeout_sec": 0, 00:07:20.111 "reconnect_delay_sec": 0, 00:07:20.111 "fast_io_fail_timeout_sec": 0, 00:07:20.111 "disable_auto_failback": false, 00:07:20.111 "generate_uuids": false, 00:07:20.111 "transport_tos": 0, 00:07:20.111 "nvme_error_stat": false, 00:07:20.111 "rdma_srq_size": 0, 00:07:20.111 "io_path_stat": false, 00:07:20.111 
"allow_accel_sequence": false, 00:07:20.111 "rdma_max_cq_size": 0, 00:07:20.111 "rdma_cm_event_timeout_ms": 0, 00:07:20.111 "dhchap_digests": [ 00:07:20.111 "sha256", 00:07:20.111 "sha384", 00:07:20.111 "sha512" 00:07:20.111 ], 00:07:20.111 "dhchap_dhgroups": [ 00:07:20.111 "null", 00:07:20.111 "ffdhe2048", 00:07:20.111 "ffdhe3072", 00:07:20.111 "ffdhe4096", 00:07:20.111 "ffdhe6144", 00:07:20.111 "ffdhe8192" 00:07:20.111 ] 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "bdev_nvme_set_hotplug", 00:07:20.111 "params": { 00:07:20.111 "period_us": 100000, 00:07:20.111 "enable": false 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "bdev_wait_for_examine" 00:07:20.111 } 00:07:20.111 ] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "scsi", 00:07:20.111 "config": null 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "scheduler", 00:07:20.111 "config": [ 00:07:20.111 { 00:07:20.111 "method": "framework_set_scheduler", 00:07:20.111 "params": { 00:07:20.111 "name": "static" 00:07:20.111 } 00:07:20.111 } 00:07:20.111 ] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "vhost_scsi", 00:07:20.111 "config": [] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "vhost_blk", 00:07:20.111 "config": [] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "ublk", 00:07:20.111 "config": [] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "nbd", 00:07:20.111 "config": [] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "nvmf", 00:07:20.111 "config": [ 00:07:20.111 { 00:07:20.111 "method": "nvmf_set_config", 00:07:20.111 "params": { 00:07:20.111 "discovery_filter": "match_any", 00:07:20.111 "admin_cmd_passthru": { 00:07:20.111 "identify_ctrlr": false 00:07:20.111 }, 00:07:20.111 "dhchap_digests": [ 00:07:20.111 "sha256", 00:07:20.111 "sha384", 00:07:20.111 "sha512" 00:07:20.111 ], 00:07:20.111 "dhchap_dhgroups": [ 00:07:20.111 "null", 00:07:20.111 "ffdhe2048", 00:07:20.111 "ffdhe3072", 00:07:20.111 "ffdhe4096", 00:07:20.111 "ffdhe6144", 00:07:20.111 "ffdhe8192" 00:07:20.111 ] 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "nvmf_set_max_subsystems", 00:07:20.111 "params": { 00:07:20.111 "max_subsystems": 1024 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "nvmf_set_crdt", 00:07:20.111 "params": { 00:07:20.111 "crdt1": 0, 00:07:20.111 "crdt2": 0, 00:07:20.111 "crdt3": 0 00:07:20.111 } 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "method": "nvmf_create_transport", 00:07:20.111 "params": { 00:07:20.111 "trtype": "TCP", 00:07:20.111 "max_queue_depth": 128, 00:07:20.111 "max_io_qpairs_per_ctrlr": 127, 00:07:20.111 "in_capsule_data_size": 4096, 00:07:20.111 "max_io_size": 131072, 00:07:20.111 "io_unit_size": 131072, 00:07:20.111 "max_aq_depth": 128, 00:07:20.111 "num_shared_buffers": 511, 00:07:20.111 "buf_cache_size": 4294967295, 00:07:20.111 "dif_insert_or_strip": false, 00:07:20.111 "zcopy": false, 00:07:20.111 "c2h_success": true, 00:07:20.111 "sock_priority": 0, 00:07:20.111 "abort_timeout_sec": 1, 00:07:20.111 "ack_timeout": 0, 00:07:20.111 "data_wr_pool_size": 0 00:07:20.111 } 00:07:20.111 } 00:07:20.111 ] 00:07:20.111 }, 00:07:20.111 { 00:07:20.111 "subsystem": "iscsi", 00:07:20.111 "config": [ 00:07:20.111 { 00:07:20.111 "method": "iscsi_set_options", 00:07:20.111 "params": { 00:07:20.111 "node_base": "iqn.2016-06.io.spdk", 00:07:20.111 "max_sessions": 128, 00:07:20.111 "max_connections_per_session": 2, 00:07:20.111 "max_queue_depth": 64, 00:07:20.111 "default_time2wait": 2, 
00:07:20.111 "default_time2retain": 20, 00:07:20.111 "first_burst_length": 8192, 00:07:20.111 "immediate_data": true, 00:07:20.111 "allow_duplicated_isid": false, 00:07:20.111 "error_recovery_level": 0, 00:07:20.111 "nop_timeout": 60, 00:07:20.111 "nop_in_interval": 30, 00:07:20.111 "disable_chap": false, 00:07:20.111 "require_chap": false, 00:07:20.111 "mutual_chap": false, 00:07:20.111 "chap_group": 0, 00:07:20.111 "max_large_datain_per_connection": 64, 00:07:20.111 "max_r2t_per_connection": 4, 00:07:20.111 "pdu_pool_size": 36864, 00:07:20.111 "immediate_data_pool_size": 16384, 00:07:20.111 "data_out_pool_size": 2048 00:07:20.111 } 00:07:20.111 } 00:07:20.111 ] 00:07:20.111 } 00:07:20.111 ] 00:07:20.111 } 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58432 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58432 ']' 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58432 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58432 00:07:20.111 killing process with pid 58432 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58432' 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58432 00:07:20.111 12:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58432 00:07:22.645 12:17:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58488 00:07:22.645 12:17:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:22.645 12:17:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58488 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58488 ']' 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58488 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58488 00:07:27.918 killing process with pid 58488 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58488' 00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58488 
00:07:27.918 12:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58488 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:30.454 ************************************ 00:07:30.454 END TEST skip_rpc_with_json 00:07:30.454 ************************************ 00:07:30.454 00:07:30.454 real 0m11.531s 00:07:30.454 user 0m10.885s 00:07:30.454 sys 0m0.936s 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 12:17:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:30.454 12:17:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.454 12:17:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.454 12:17:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 ************************************ 00:07:30.454 START TEST skip_rpc_with_delay 00:07:30.454 ************************************ 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.454 [2024-10-07 12:17:53.352731] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:30.454 [2024-10-07 12:17:53.352861] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.454 00:07:30.454 real 0m0.174s 00:07:30.454 user 0m0.090s 00:07:30.454 sys 0m0.083s 00:07:30.454 ************************************ 00:07:30.454 END TEST skip_rpc_with_delay 00:07:30.454 ************************************ 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.454 12:17:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 12:17:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:30.454 12:17:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:30.454 12:17:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:30.454 12:17:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.454 12:17:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.454 12:17:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 ************************************ 00:07:30.454 START TEST exit_on_failed_rpc_init 00:07:30.454 ************************************ 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58616 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58616 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58616 ']' 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.454 12:17:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:30.454 [2024-10-07 12:17:53.605317] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:30.454 [2024-10-07 12:17:53.605445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:07:30.713 [2024-10-07 12:17:53.775144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.713 [2024-10-07 12:17:53.972743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:31.650 12:17:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:31.650 [2024-10-07 12:17:54.908845] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:31.650 [2024-10-07 12:17:54.909173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58644 ] 00:07:31.909 [2024-10-07 12:17:55.077100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.167 [2024-10-07 12:17:55.279502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.168 [2024-10-07 12:17:55.279821] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:32.168 [2024-10-07 12:17:55.279949] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:32.168 [2024-10-07 12:17:55.279969] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58616 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58616 ']' 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58616 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.426 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58616 00:07:32.686 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.686 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.686 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58616' 00:07:32.686 killing process with pid 58616 00:07:32.686 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58616 00:07:32.686 12:17:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58616 00:07:35.264 00:07:35.264 real 0m4.712s 00:07:35.264 user 0m5.200s 00:07:35.264 sys 0m0.638s 00:07:35.264 12:17:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.264 ************************************ 00:07:35.264 END TEST exit_on_failed_rpc_init 00:07:35.264 ************************************ 00:07:35.264 12:17:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 12:17:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:35.264 00:07:35.264 real 0m24.464s 00:07:35.264 user 0m23.423s 00:07:35.264 sys 0m2.392s 00:07:35.264 12:17:58 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.264 ************************************ 00:07:35.264 END TEST skip_rpc 00:07:35.264 ************************************ 00:07:35.264 12:17:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 12:17:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:35.264 12:17:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.264 12:17:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.264 12:17:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 
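The tail of the skip_rpc suite above is all failure-path coverage: skip_rpc_with_delay asserts that spdk_tgt rejects the contradictory pair --no-rpc-server --wait-for-rpc, and exit_on_failed_rpc_init asserts that a second target cannot claim an already-bound /var/tmp/spdk.sock. A hedged sketch of both checks, with exit codes as observed in this run (the es=234 -> 106 -> 1 dance above is the harness normalizing them):

    # Contradictory flags must fail fast, per the app.c:840 error above
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "expected --wait-for-rpc to be rejected" >&2; exit 1
    fi

    # A second instance on the default RPC socket must exit non-zero
    ./build/bin/spdk_tgt -m 0x1 & pid=$!
    sleep 5                              # stand-in for the harness's waitforlisten
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "second spdk_tgt should have failed on the busy socket" >&2; exit 1
    fi
    kill "$pid"; wait "$pid"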
************************************ 00:07:35.264 START TEST rpc_client 00:07:35.264 ************************************ 00:07:35.264 12:17:58 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:35.264 * Looking for test storage... 00:07:35.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:35.264 12:17:58 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.264 12:17:58 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.264 12:17:58 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.264 12:17:58 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.264 12:17:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.265 12:17:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.523 12:17:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.523 --rc genhtml_branch_coverage=1 00:07:35.523 --rc genhtml_function_coverage=1 00:07:35.523 --rc genhtml_legend=1 00:07:35.523 --rc geninfo_all_blocks=1 00:07:35.523 --rc geninfo_unexecuted_blocks=1 00:07:35.523 00:07:35.523 ' 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.523 --rc genhtml_branch_coverage=1 00:07:35.523 --rc genhtml_function_coverage=1 00:07:35.523 --rc genhtml_legend=1 00:07:35.523 --rc geninfo_all_blocks=1 00:07:35.523 --rc geninfo_unexecuted_blocks=1 00:07:35.523 00:07:35.523 ' 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.523 --rc genhtml_branch_coverage=1 00:07:35.523 --rc genhtml_function_coverage=1 00:07:35.523 --rc genhtml_legend=1 00:07:35.523 --rc geninfo_all_blocks=1 00:07:35.523 --rc geninfo_unexecuted_blocks=1 00:07:35.523 00:07:35.523 ' 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.523 --rc genhtml_branch_coverage=1 00:07:35.523 --rc genhtml_function_coverage=1 00:07:35.523 --rc genhtml_legend=1 00:07:35.523 --rc geninfo_all_blocks=1 00:07:35.523 --rc geninfo_unexecuted_blocks=1 00:07:35.523 00:07:35.523 ' 00:07:35.523 12:17:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:35.523 OK 00:07:35.523 12:17:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:35.523 00:07:35.523 real 0m0.304s 00:07:35.523 user 0m0.151s 00:07:35.523 sys 0m0.168s 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.523 12:17:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 ************************************ 00:07:35.523 END TEST rpc_client 00:07:35.523 ************************************ 00:07:35.523 12:17:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:35.523 12:17:58 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.523 12:17:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.524 12:17:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.524 ************************************ 00:07:35.524 START TEST json_config 00:07:35.524 ************************************ 00:07:35.524 12:17:58 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:35.524 12:17:58 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.524 12:17:58 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.524 12:17:58 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.783 12:17:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.783 12:17:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.783 12:17:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.783 12:17:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.783 12:17:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.783 12:17:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:35.783 12:17:58 json_config -- scripts/common.sh@345 -- # : 1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.783 12:17:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.783 12:17:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@353 -- # local d=1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.783 12:17:58 json_config -- scripts/common.sh@355 -- # echo 1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.783 12:17:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@353 -- # local d=2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.783 12:17:58 json_config -- scripts/common.sh@355 -- # echo 2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.783 12:17:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.783 12:17:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.783 12:17:58 json_config -- scripts/common.sh@368 -- # return 0 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.783 --rc genhtml_branch_coverage=1 00:07:35.783 --rc genhtml_function_coverage=1 00:07:35.783 --rc genhtml_legend=1 00:07:35.783 --rc geninfo_all_blocks=1 00:07:35.783 --rc geninfo_unexecuted_blocks=1 00:07:35.783 00:07:35.783 ' 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.783 --rc genhtml_branch_coverage=1 00:07:35.783 --rc genhtml_function_coverage=1 00:07:35.783 --rc genhtml_legend=1 00:07:35.783 --rc geninfo_all_blocks=1 00:07:35.783 --rc geninfo_unexecuted_blocks=1 00:07:35.783 00:07:35.783 ' 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.783 --rc genhtml_branch_coverage=1 00:07:35.783 --rc genhtml_function_coverage=1 00:07:35.783 --rc genhtml_legend=1 00:07:35.783 --rc geninfo_all_blocks=1 00:07:35.783 --rc geninfo_unexecuted_blocks=1 00:07:35.783 00:07:35.783 ' 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.783 --rc genhtml_branch_coverage=1 00:07:35.783 --rc genhtml_function_coverage=1 00:07:35.783 --rc genhtml_legend=1 00:07:35.783 --rc geninfo_all_blocks=1 00:07:35.783 --rc geninfo_unexecuted_blocks=1 00:07:35.783 00:07:35.783 ' 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.783 12:17:58 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4aa3afb5-543d-48d0-900a-0624ed4cc47b 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4aa3afb5-543d-48d0-900a-0624ed4cc47b 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.783 12:17:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.783 12:17:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.783 12:17:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.783 12:17:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.783 12:17:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.783 12:17:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.783 12:17:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.783 12:17:58 json_config -- paths/export.sh@5 -- # export PATH 00:07:35.783 12:17:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@51 -- # : 0 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.783 12:17:58 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.783 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.783 12:17:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:35.783 WARNING: No tests are enabled so not running JSON configuration tests 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:35.783 12:17:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:35.783 00:07:35.783 real 0m0.226s 00:07:35.783 user 0m0.128s 00:07:35.783 sys 0m0.106s 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.783 12:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.783 ************************************ 00:07:35.784 END TEST json_config 00:07:35.784 ************************************ 00:07:35.784 12:17:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:35.784 12:17:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.784 12:17:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.784 12:17:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.784 ************************************ 00:07:35.784 START TEST json_config_extra_key 00:07:35.784 ************************************ 00:07:35.784 12:17:59 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.043 12:17:59 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:36.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.043 --rc genhtml_branch_coverage=1 00:07:36.043 --rc genhtml_function_coverage=1 00:07:36.043 --rc genhtml_legend=1 00:07:36.043 --rc geninfo_all_blocks=1 00:07:36.043 --rc geninfo_unexecuted_blocks=1 00:07:36.043 00:07:36.043 ' 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:36.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.043 --rc genhtml_branch_coverage=1 00:07:36.043 --rc genhtml_function_coverage=1 00:07:36.043 --rc genhtml_legend=1 00:07:36.043 --rc geninfo_all_blocks=1 00:07:36.043 --rc geninfo_unexecuted_blocks=1 00:07:36.043 00:07:36.043 ' 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:36.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.043 --rc genhtml_branch_coverage=1 00:07:36.043 --rc genhtml_function_coverage=1 00:07:36.043 --rc genhtml_legend=1 00:07:36.043 --rc geninfo_all_blocks=1 00:07:36.043 --rc geninfo_unexecuted_blocks=1 00:07:36.043 00:07:36.043 ' 00:07:36.043 12:17:59 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:36.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.043 --rc genhtml_branch_coverage=1 00:07:36.043 --rc 
genhtml_function_coverage=1 00:07:36.043 --rc genhtml_legend=1 00:07:36.043 --rc geninfo_all_blocks=1 00:07:36.043 --rc geninfo_unexecuted_blocks=1 00:07:36.043 00:07:36.043 ' 00:07:36.043 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4aa3afb5-543d-48d0-900a-0624ed4cc47b 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4aa3afb5-543d-48d0-900a-0624ed4cc47b 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.043 12:17:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.043 12:17:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.043 12:17:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.043 12:17:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.044 12:17:59 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.044 12:17:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:36.044 12:17:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.044 12:17:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:36.044 INFO: launching applications... 00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
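The "[: : integer expression expected" message in the trace above is a real scripting bug surfacing at runtime: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test's -eq operator refuses an empty string. A minimal reproduction and two defensive rewrites (the variable name here is a hypothetical stand-in, not the one nvmf/common.sh actually tests):

    # Reproduce the error seen in the trace: numeric test on an empty value.
    unset NIC_COUNT                                # hypothetical stand-in variable
    [ "$NIC_COUNT" -eq 1 ]                         # -> "[: : integer expression expected"

    # Rewrite 1: default the value before comparing.
    [ "${NIC_COUNT:-0}" -eq 1 ] && echo "exactly one"

    # Rewrite 2: guard against the empty case explicitly.
    [ -n "$NIC_COUNT" ] && [ "$NIC_COUNT" -eq 1 ] && echo "set and one"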
00:07:36.044 12:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58855 00:07:36.044 Waiting for target to run... 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58855 /var/tmp/spdk_tgt.sock 00:07:36.044 12:17:59 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58855 ']' 00:07:36.044 12:17:59 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:36.044 12:17:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:36.044 12:17:59 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:36.044 12:17:59 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:36.044 12:17:59 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.044 12:17:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:36.303 [2024-10-07 12:17:59.339279] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:36.303 [2024-10-07 12:17:59.339403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:07:36.562 [2024-10-07 12:17:59.730650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.820 [2024-10-07 12:17:59.917120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.388 12:18:00 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.388 12:18:00 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:37.388 00:07:37.388 INFO: shutting down applications... 00:07:37.388 12:18:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
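The shutdown sequence that follows is worth pausing on: json_config/common.sh sends SIGINT to the target, then polls it with kill -0 at 0.5 s intervals for up to 30 iterations before declaring the shutdown done. A sketch of that pattern, assuming a generic pid argument (the SIGKILL escalation at the end is an assumption; the trace below never needs it):

    # Graceful-shutdown wait, mirroring json_config_test_shutdown_app.
    pid=$1
    kill -SIGINT "$pid"                            # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then      # kill -0 only probes existence
            echo 'SPDK target shutdown done'
            exit 0
        fi
        sleep 0.5                                  # same interval as the trace
    done
    kill -SIGKILL "$pid" 2>/dev/null               # assumed last resort
    exit 1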
00:07:37.388 12:18:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58855 ]] 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58855 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:37.388 12:18:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:37.955 12:18:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:37.955 12:18:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:37.955 12:18:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:37.955 12:18:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:38.523 12:18:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:38.523 12:18:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:38.523 12:18:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:38.523 12:18:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:39.091 12:18:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:39.091 12:18:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.091 12:18:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:39.091 12:18:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:39.660 12:18:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:39.660 12:18:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.660 12:18:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:39.660 12:18:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:39.918 12:18:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:39.918 12:18:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.918 12:18:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:39.918 12:18:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58855 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:40.486 SPDK target shutdown done 00:07:40.486 12:18:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:40.486 Success 00:07:40.486 12:18:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:40.486 00:07:40.486 real 0m4.674s 00:07:40.486 user 0m4.141s 00:07:40.486 sys 0m0.638s 00:07:40.486 
12:18:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.486 12:18:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:40.486 ************************************ 00:07:40.486 END TEST json_config_extra_key 00:07:40.486 ************************************ 00:07:40.486 12:18:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:40.486 12:18:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.486 12:18:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.486 12:18:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.486 ************************************ 00:07:40.486 START TEST alias_rpc 00:07:40.486 ************************************ 00:07:40.487 12:18:03 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:40.745 * Looking for test storage... 00:07:40.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:40.745 12:18:03 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.745 12:18:03 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.745 12:18:03 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.745 12:18:03 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.745 12:18:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.745 12:18:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.746 12:18:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.746 --rc genhtml_branch_coverage=1 00:07:40.746 --rc genhtml_function_coverage=1 00:07:40.746 --rc genhtml_legend=1 00:07:40.746 --rc geninfo_all_blocks=1 00:07:40.746 --rc geninfo_unexecuted_blocks=1 00:07:40.746 00:07:40.746 ' 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.746 --rc genhtml_branch_coverage=1 00:07:40.746 --rc genhtml_function_coverage=1 00:07:40.746 --rc genhtml_legend=1 00:07:40.746 --rc geninfo_all_blocks=1 00:07:40.746 --rc geninfo_unexecuted_blocks=1 00:07:40.746 00:07:40.746 ' 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.746 --rc genhtml_branch_coverage=1 00:07:40.746 --rc genhtml_function_coverage=1 00:07:40.746 --rc genhtml_legend=1 00:07:40.746 --rc geninfo_all_blocks=1 00:07:40.746 --rc geninfo_unexecuted_blocks=1 00:07:40.746 00:07:40.746 ' 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.746 --rc genhtml_branch_coverage=1 00:07:40.746 --rc genhtml_function_coverage=1 00:07:40.746 --rc genhtml_legend=1 00:07:40.746 --rc geninfo_all_blocks=1 00:07:40.746 --rc geninfo_unexecuted_blocks=1 00:07:40.746 00:07:40.746 ' 00:07:40.746 12:18:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:40.746 12:18:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58968 00:07:40.746 12:18:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:40.746 12:18:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58968 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58968 ']' 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
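The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is waitforlisten doing its job: the test blocks until the freshly forked spdk_tgt both stays alive and answers RPCs on its socket. A condensed sketch of that readiness poll (retry budget and probe command are illustrative; the real helper lives in autotest_common.sh):

    # Block until $pid is serving RPCs on $sock, or fail.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
            if [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                               # socket up and RPC answered
            fi
            sleep 0.1
        done
        return 1
    }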
00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.746 12:18:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.005 [2024-10-07 12:18:04.094601] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:41.005 [2024-10-07 12:18:04.094739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ] 00:07:41.005 [2024-10-07 12:18:04.265067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.264 [2024-10-07 12:18:04.476181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.200 12:18:05 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.200 12:18:05 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:42.200 12:18:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:42.460 12:18:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58968 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58968 ']' 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58968 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58968 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.460 killing process with pid 58968 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58968' 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 58968 00:07:42.460 12:18:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 58968 00:07:45.024 00:07:45.024 real 0m4.366s 00:07:45.024 user 0m4.308s 00:07:45.024 sys 0m0.641s 00:07:45.024 12:18:08 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.024 12:18:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.024 ************************************ 00:07:45.024 END TEST alias_rpc 00:07:45.024 ************************************ 00:07:45.024 12:18:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:45.024 12:18:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:45.024 12:18:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.024 12:18:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.024 12:18:08 -- common/autotest_common.sh@10 -- # set +x 00:07:45.024 ************************************ 00:07:45.024 START TEST spdkcli_tcp 00:07:45.024 ************************************ 00:07:45.024 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:45.283 * Looking for test storage... 
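Each suite above tears its target down through killprocess, visible in the alias_rpc epilogue: confirm the pid is alive with kill -0, read its comm name (reactor_0 here), refuse to signal anything named sudo, then SIGTERM and reap it with wait. A sketch under the assumption that the caller forked the target itself, since wait can only reap the shell's own children (the real helper has further branches not shown in this trace):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1     # must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")    # "reactor_0" in the trace
        [ "$name" = sudo ] && return 1             # safety check from the trace
        echo "killing process with pid $pid"
        kill "$pid"                                # default SIGTERM
        wait "$pid" 2>/dev/null                    # reap our own child
    }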
00:07:45.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.283 12:18:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:45.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.283 --rc genhtml_branch_coverage=1 00:07:45.283 --rc genhtml_function_coverage=1 00:07:45.283 --rc genhtml_legend=1 00:07:45.283 --rc geninfo_all_blocks=1 00:07:45.283 --rc geninfo_unexecuted_blocks=1 00:07:45.283 00:07:45.283 ' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:45.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.283 --rc genhtml_branch_coverage=1 00:07:45.283 --rc genhtml_function_coverage=1 00:07:45.283 --rc genhtml_legend=1 00:07:45.283 --rc geninfo_all_blocks=1 00:07:45.283 --rc geninfo_unexecuted_blocks=1 00:07:45.283 
00:07:45.283 ' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:45.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.283 --rc genhtml_branch_coverage=1 00:07:45.283 --rc genhtml_function_coverage=1 00:07:45.283 --rc genhtml_legend=1 00:07:45.283 --rc geninfo_all_blocks=1 00:07:45.283 --rc geninfo_unexecuted_blocks=1 00:07:45.283 00:07:45.283 ' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:45.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.283 --rc genhtml_branch_coverage=1 00:07:45.283 --rc genhtml_function_coverage=1 00:07:45.283 --rc genhtml_legend=1 00:07:45.283 --rc geninfo_all_blocks=1 00:07:45.283 --rc geninfo_unexecuted_blocks=1 00:07:45.283 00:07:45.283 ' 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59075 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:45.283 12:18:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59075 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59075 ']' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.283 12:18:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.283 [2024-10-07 12:18:08.550362] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
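The lt 1.15 2 gate that has now run before every suite (json_config, json_config_extra_key, alias_rpc, and spdkcli_tcp above) is scripts/common.sh's cmp_versions: split both version strings on '.', '-' or ':' and compare field by field, treating missing fields as zero. A condensed sketch of the less-than case (the real helper also routes each field through a decimal filter that tolerates non-numeric parts):

    # "is version $1 older than version $2?"  e.g. version_lt 1.15 2 -> true
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                   # equal is not less-than
    }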
00:07:45.283 [2024-10-07 12:18:08.550487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59075 ] 00:07:45.542 [2024-10-07 12:18:08.723502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.799 [2024-10-07 12:18:08.943279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.799 [2024-10-07 12:18:08.943314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.734 12:18:09 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.734 12:18:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:46.734 12:18:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59097 00:07:46.734 12:18:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:46.734 12:18:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:46.734 [ 00:07:46.734 "bdev_malloc_delete", 00:07:46.734 "bdev_malloc_create", 00:07:46.734 "bdev_null_resize", 00:07:46.734 "bdev_null_delete", 00:07:46.734 "bdev_null_create", 00:07:46.734 "bdev_nvme_cuse_unregister", 00:07:46.734 "bdev_nvme_cuse_register", 00:07:46.734 "bdev_opal_new_user", 00:07:46.734 "bdev_opal_set_lock_state", 00:07:46.734 "bdev_opal_delete", 00:07:46.734 "bdev_opal_get_info", 00:07:46.734 "bdev_opal_create", 00:07:46.734 "bdev_nvme_opal_revert", 00:07:46.734 "bdev_nvme_opal_init", 00:07:46.734 "bdev_nvme_send_cmd", 00:07:46.734 "bdev_nvme_set_keys", 00:07:46.734 "bdev_nvme_get_path_iostat", 00:07:46.734 "bdev_nvme_get_mdns_discovery_info", 00:07:46.734 "bdev_nvme_stop_mdns_discovery", 00:07:46.734 "bdev_nvme_start_mdns_discovery", 00:07:46.734 "bdev_nvme_set_multipath_policy", 00:07:46.734 "bdev_nvme_set_preferred_path", 00:07:46.734 "bdev_nvme_get_io_paths", 00:07:46.734 "bdev_nvme_remove_error_injection", 00:07:46.734 "bdev_nvme_add_error_injection", 00:07:46.734 "bdev_nvme_get_discovery_info", 00:07:46.734 "bdev_nvme_stop_discovery", 00:07:46.734 "bdev_nvme_start_discovery", 00:07:46.734 "bdev_nvme_get_controller_health_info", 00:07:46.734 "bdev_nvme_disable_controller", 00:07:46.734 "bdev_nvme_enable_controller", 00:07:46.734 "bdev_nvme_reset_controller", 00:07:46.734 "bdev_nvme_get_transport_statistics", 00:07:46.734 "bdev_nvme_apply_firmware", 00:07:46.734 "bdev_nvme_detach_controller", 00:07:46.734 "bdev_nvme_get_controllers", 00:07:46.734 "bdev_nvme_attach_controller", 00:07:46.734 "bdev_nvme_set_hotplug", 00:07:46.734 "bdev_nvme_set_options", 00:07:46.734 "bdev_passthru_delete", 00:07:46.734 "bdev_passthru_create", 00:07:46.734 "bdev_lvol_set_parent_bdev", 00:07:46.734 "bdev_lvol_set_parent", 00:07:46.734 "bdev_lvol_check_shallow_copy", 00:07:46.734 "bdev_lvol_start_shallow_copy", 00:07:46.734 "bdev_lvol_grow_lvstore", 00:07:46.734 "bdev_lvol_get_lvols", 00:07:46.734 "bdev_lvol_get_lvstores", 00:07:46.734 "bdev_lvol_delete", 00:07:46.734 "bdev_lvol_set_read_only", 00:07:46.734 "bdev_lvol_resize", 00:07:46.734 "bdev_lvol_decouple_parent", 00:07:46.734 "bdev_lvol_inflate", 00:07:46.734 "bdev_lvol_rename", 00:07:46.734 "bdev_lvol_clone_bdev", 00:07:46.734 "bdev_lvol_clone", 00:07:46.734 "bdev_lvol_snapshot", 00:07:46.734 "bdev_lvol_create", 00:07:46.734 "bdev_lvol_delete_lvstore", 00:07:46.734 "bdev_lvol_rename_lvstore", 00:07:46.734 
"bdev_lvol_create_lvstore", 00:07:46.734 "bdev_raid_set_options", 00:07:46.734 "bdev_raid_remove_base_bdev", 00:07:46.734 "bdev_raid_add_base_bdev", 00:07:46.734 "bdev_raid_delete", 00:07:46.734 "bdev_raid_create", 00:07:46.734 "bdev_raid_get_bdevs", 00:07:46.734 "bdev_error_inject_error", 00:07:46.734 "bdev_error_delete", 00:07:46.734 "bdev_error_create", 00:07:46.734 "bdev_split_delete", 00:07:46.734 "bdev_split_create", 00:07:46.734 "bdev_delay_delete", 00:07:46.734 "bdev_delay_create", 00:07:46.734 "bdev_delay_update_latency", 00:07:46.734 "bdev_zone_block_delete", 00:07:46.734 "bdev_zone_block_create", 00:07:46.734 "blobfs_create", 00:07:46.734 "blobfs_detect", 00:07:46.734 "blobfs_set_cache_size", 00:07:46.734 "bdev_xnvme_delete", 00:07:46.734 "bdev_xnvme_create", 00:07:46.734 "bdev_aio_delete", 00:07:46.734 "bdev_aio_rescan", 00:07:46.734 "bdev_aio_create", 00:07:46.734 "bdev_ftl_set_property", 00:07:46.734 "bdev_ftl_get_properties", 00:07:46.734 "bdev_ftl_get_stats", 00:07:46.734 "bdev_ftl_unmap", 00:07:46.734 "bdev_ftl_unload", 00:07:46.734 "bdev_ftl_delete", 00:07:46.734 "bdev_ftl_load", 00:07:46.734 "bdev_ftl_create", 00:07:46.734 "bdev_virtio_attach_controller", 00:07:46.734 "bdev_virtio_scsi_get_devices", 00:07:46.734 "bdev_virtio_detach_controller", 00:07:46.734 "bdev_virtio_blk_set_hotplug", 00:07:46.734 "bdev_iscsi_delete", 00:07:46.734 "bdev_iscsi_create", 00:07:46.734 "bdev_iscsi_set_options", 00:07:46.734 "accel_error_inject_error", 00:07:46.734 "ioat_scan_accel_module", 00:07:46.734 "dsa_scan_accel_module", 00:07:46.734 "iaa_scan_accel_module", 00:07:46.734 "keyring_file_remove_key", 00:07:46.734 "keyring_file_add_key", 00:07:46.734 "keyring_linux_set_options", 00:07:46.734 "fsdev_aio_delete", 00:07:46.734 "fsdev_aio_create", 00:07:46.734 "iscsi_get_histogram", 00:07:46.734 "iscsi_enable_histogram", 00:07:46.734 "iscsi_set_options", 00:07:46.734 "iscsi_get_auth_groups", 00:07:46.734 "iscsi_auth_group_remove_secret", 00:07:46.734 "iscsi_auth_group_add_secret", 00:07:46.734 "iscsi_delete_auth_group", 00:07:46.734 "iscsi_create_auth_group", 00:07:46.734 "iscsi_set_discovery_auth", 00:07:46.734 "iscsi_get_options", 00:07:46.734 "iscsi_target_node_request_logout", 00:07:46.734 "iscsi_target_node_set_redirect", 00:07:46.734 "iscsi_target_node_set_auth", 00:07:46.734 "iscsi_target_node_add_lun", 00:07:46.734 "iscsi_get_stats", 00:07:46.734 "iscsi_get_connections", 00:07:46.734 "iscsi_portal_group_set_auth", 00:07:46.734 "iscsi_start_portal_group", 00:07:46.734 "iscsi_delete_portal_group", 00:07:46.734 "iscsi_create_portal_group", 00:07:46.734 "iscsi_get_portal_groups", 00:07:46.734 "iscsi_delete_target_node", 00:07:46.734 "iscsi_target_node_remove_pg_ig_maps", 00:07:46.734 "iscsi_target_node_add_pg_ig_maps", 00:07:46.734 "iscsi_create_target_node", 00:07:46.734 "iscsi_get_target_nodes", 00:07:46.734 "iscsi_delete_initiator_group", 00:07:46.734 "iscsi_initiator_group_remove_initiators", 00:07:46.734 "iscsi_initiator_group_add_initiators", 00:07:46.734 "iscsi_create_initiator_group", 00:07:46.734 "iscsi_get_initiator_groups", 00:07:46.734 "nvmf_set_crdt", 00:07:46.734 "nvmf_set_config", 00:07:46.734 "nvmf_set_max_subsystems", 00:07:46.734 "nvmf_stop_mdns_prr", 00:07:46.734 "nvmf_publish_mdns_prr", 00:07:46.734 "nvmf_subsystem_get_listeners", 00:07:46.734 "nvmf_subsystem_get_qpairs", 00:07:46.734 "nvmf_subsystem_get_controllers", 00:07:46.734 "nvmf_get_stats", 00:07:46.734 "nvmf_get_transports", 00:07:46.734 "nvmf_create_transport", 00:07:46.734 "nvmf_get_targets", 00:07:46.734 
"nvmf_delete_target", 00:07:46.734 "nvmf_create_target", 00:07:46.734 "nvmf_subsystem_allow_any_host", 00:07:46.734 "nvmf_subsystem_set_keys", 00:07:46.734 "nvmf_subsystem_remove_host", 00:07:46.734 "nvmf_subsystem_add_host", 00:07:46.734 "nvmf_ns_remove_host", 00:07:46.734 "nvmf_ns_add_host", 00:07:46.734 "nvmf_subsystem_remove_ns", 00:07:46.734 "nvmf_subsystem_set_ns_ana_group", 00:07:46.734 "nvmf_subsystem_add_ns", 00:07:46.734 "nvmf_subsystem_listener_set_ana_state", 00:07:46.734 "nvmf_discovery_get_referrals", 00:07:46.734 "nvmf_discovery_remove_referral", 00:07:46.734 "nvmf_discovery_add_referral", 00:07:46.734 "nvmf_subsystem_remove_listener", 00:07:46.734 "nvmf_subsystem_add_listener", 00:07:46.734 "nvmf_delete_subsystem", 00:07:46.734 "nvmf_create_subsystem", 00:07:46.734 "nvmf_get_subsystems", 00:07:46.734 "env_dpdk_get_mem_stats", 00:07:46.734 "nbd_get_disks", 00:07:46.734 "nbd_stop_disk", 00:07:46.734 "nbd_start_disk", 00:07:46.734 "ublk_recover_disk", 00:07:46.734 "ublk_get_disks", 00:07:46.734 "ublk_stop_disk", 00:07:46.734 "ublk_start_disk", 00:07:46.734 "ublk_destroy_target", 00:07:46.734 "ublk_create_target", 00:07:46.734 "virtio_blk_create_transport", 00:07:46.734 "virtio_blk_get_transports", 00:07:46.734 "vhost_controller_set_coalescing", 00:07:46.734 "vhost_get_controllers", 00:07:46.734 "vhost_delete_controller", 00:07:46.734 "vhost_create_blk_controller", 00:07:46.734 "vhost_scsi_controller_remove_target", 00:07:46.734 "vhost_scsi_controller_add_target", 00:07:46.734 "vhost_start_scsi_controller", 00:07:46.734 "vhost_create_scsi_controller", 00:07:46.734 "thread_set_cpumask", 00:07:46.735 "scheduler_set_options", 00:07:46.735 "framework_get_governor", 00:07:46.735 "framework_get_scheduler", 00:07:46.735 "framework_set_scheduler", 00:07:46.735 "framework_get_reactors", 00:07:46.735 "thread_get_io_channels", 00:07:46.735 "thread_get_pollers", 00:07:46.735 "thread_get_stats", 00:07:46.735 "framework_monitor_context_switch", 00:07:46.735 "spdk_kill_instance", 00:07:46.735 "log_enable_timestamps", 00:07:46.735 "log_get_flags", 00:07:46.735 "log_clear_flag", 00:07:46.735 "log_set_flag", 00:07:46.735 "log_get_level", 00:07:46.735 "log_set_level", 00:07:46.735 "log_get_print_level", 00:07:46.735 "log_set_print_level", 00:07:46.735 "framework_enable_cpumask_locks", 00:07:46.735 "framework_disable_cpumask_locks", 00:07:46.735 "framework_wait_init", 00:07:46.735 "framework_start_init", 00:07:46.735 "scsi_get_devices", 00:07:46.735 "bdev_get_histogram", 00:07:46.735 "bdev_enable_histogram", 00:07:46.735 "bdev_set_qos_limit", 00:07:46.735 "bdev_set_qd_sampling_period", 00:07:46.735 "bdev_get_bdevs", 00:07:46.735 "bdev_reset_iostat", 00:07:46.735 "bdev_get_iostat", 00:07:46.735 "bdev_examine", 00:07:46.735 "bdev_wait_for_examine", 00:07:46.735 "bdev_set_options", 00:07:46.735 "accel_get_stats", 00:07:46.735 "accel_set_options", 00:07:46.735 "accel_set_driver", 00:07:46.735 "accel_crypto_key_destroy", 00:07:46.735 "accel_crypto_keys_get", 00:07:46.735 "accel_crypto_key_create", 00:07:46.735 "accel_assign_opc", 00:07:46.735 "accel_get_module_info", 00:07:46.735 "accel_get_opc_assignments", 00:07:46.735 "vmd_rescan", 00:07:46.735 "vmd_remove_device", 00:07:46.735 "vmd_enable", 00:07:46.735 "sock_get_default_impl", 00:07:46.735 "sock_set_default_impl", 00:07:46.735 "sock_impl_set_options", 00:07:46.735 "sock_impl_get_options", 00:07:46.735 "iobuf_get_stats", 00:07:46.735 "iobuf_set_options", 00:07:46.735 "keyring_get_keys", 00:07:46.735 "framework_get_pci_devices", 00:07:46.735 
"framework_get_config", 00:07:46.735 "framework_get_subsystems", 00:07:46.735 "fsdev_set_opts", 00:07:46.735 "fsdev_get_opts", 00:07:46.735 "trace_get_info", 00:07:46.735 "trace_get_tpoint_group_mask", 00:07:46.735 "trace_disable_tpoint_group", 00:07:46.735 "trace_enable_tpoint_group", 00:07:46.735 "trace_clear_tpoint_mask", 00:07:46.735 "trace_set_tpoint_mask", 00:07:46.735 "notify_get_notifications", 00:07:46.735 "notify_get_types", 00:07:46.735 "spdk_get_version", 00:07:46.735 "rpc_get_methods" 00:07:46.735 ] 00:07:46.735 12:18:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:46.735 12:18:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.735 12:18:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.993 12:18:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:46.993 12:18:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59075 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59075 ']' 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59075 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59075 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.993 killing process with pid 59075 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59075' 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59075 00:07:46.993 12:18:10 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59075 00:07:49.527 00:07:49.527 real 0m4.428s 00:07:49.527 user 0m7.641s 00:07:49.527 sys 0m0.692s 00:07:49.527 12:18:12 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.527 ************************************ 00:07:49.527 END TEST spdkcli_tcp 00:07:49.527 ************************************ 00:07:49.527 12:18:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 12:18:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:49.527 12:18:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.527 12:18:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.527 12:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:49.527 ************************************ 00:07:49.527 START TEST dpdk_mem_utility 00:07:49.527 ************************************ 00:07:49.527 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:49.786 * Looking for test storage... 
00:07:49.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.786 12:18:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.786 --rc genhtml_branch_coverage=1 00:07:49.786 --rc genhtml_function_coverage=1 00:07:49.786 --rc genhtml_legend=1 00:07:49.786 --rc geninfo_all_blocks=1 00:07:49.786 --rc geninfo_unexecuted_blocks=1 00:07:49.786 00:07:49.786 ' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.786 --rc 
genhtml_branch_coverage=1 00:07:49.786 --rc genhtml_function_coverage=1 00:07:49.786 --rc genhtml_legend=1 00:07:49.786 --rc geninfo_all_blocks=1 00:07:49.786 --rc geninfo_unexecuted_blocks=1 00:07:49.786 00:07:49.786 ' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.786 --rc genhtml_branch_coverage=1 00:07:49.786 --rc genhtml_function_coverage=1 00:07:49.786 --rc genhtml_legend=1 00:07:49.786 --rc geninfo_all_blocks=1 00:07:49.786 --rc geninfo_unexecuted_blocks=1 00:07:49.786 00:07:49.786 ' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.786 --rc genhtml_branch_coverage=1 00:07:49.786 --rc genhtml_function_coverage=1 00:07:49.786 --rc genhtml_legend=1 00:07:49.786 --rc geninfo_all_blocks=1 00:07:49.786 --rc geninfo_unexecuted_blocks=1 00:07:49.786 00:07:49.786 ' 00:07:49.786 12:18:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:49.786 12:18:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59202 00:07:49.786 12:18:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:49.786 12:18:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59202 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59202 ']' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.786 12:18:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 [2024-10-07 12:18:13.052784] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:49.786 [2024-10-07 12:18:13.052943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59202 ] 00:07:50.045 [2024-10-07 12:18:13.230526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.304 [2024-10-07 12:18:13.434829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.243 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.243 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:51.243 12:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:51.243 12:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:51.243 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.243 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:51.243 { 00:07:51.243 "filename": "/tmp/spdk_mem_dump.txt" 00:07:51.243 } 00:07:51.243 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.243 12:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:51.243 DPDK memory size 866.000000 MiB in 1 heap(s) 00:07:51.243 1 heaps totaling size 866.000000 MiB 00:07:51.243 size: 866.000000 MiB heap id: 0 00:07:51.243 end heaps---------- 00:07:51.243 9 mempools totaling size 642.649841 MiB 00:07:51.243 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:51.243 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:51.243 size: 92.545471 MiB name: bdev_io_59202 00:07:51.243 size: 51.011292 MiB name: evtpool_59202 00:07:51.243 size: 50.003479 MiB name: msgpool_59202 00:07:51.243 size: 36.509338 MiB name: fsdev_io_59202 00:07:51.243 size: 21.763794 MiB name: PDU_Pool 00:07:51.243 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:51.243 size: 0.026123 MiB name: Session_Pool 00:07:51.243 end mempools------- 00:07:51.243 6 memzones totaling size 4.142822 MiB 00:07:51.243 size: 1.000366 MiB name: RG_ring_0_59202 00:07:51.243 size: 1.000366 MiB name: RG_ring_1_59202 00:07:51.243 size: 1.000366 MiB name: RG_ring_4_59202 00:07:51.243 size: 1.000366 MiB name: RG_ring_5_59202 00:07:51.243 size: 0.125366 MiB name: RG_ring_2_59202 00:07:51.243 size: 0.015991 MiB name: RG_ring_3_59202 00:07:51.243 end memzones------- 00:07:51.244 12:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:51.244 heap id: 0 total size: 866.000000 MiB number of busy elements: 310 number of free elements: 19 00:07:51.244 list of free elements. 
size: 19.914795 MiB 00:07:51.244 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:51.244 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:51.244 element at address: 0x200009600000 with size: 1.995972 MiB 00:07:51.244 element at address: 0x20000d800000 with size: 1.995972 MiB 00:07:51.244 element at address: 0x200007000000 with size: 1.991028 MiB 00:07:51.244 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:07:51.244 element at address: 0x20001c300040 with size: 0.999939 MiB 00:07:51.244 element at address: 0x20001c400000 with size: 0.999084 MiB 00:07:51.244 element at address: 0x200035000000 with size: 0.994324 MiB 00:07:51.244 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:07:51.244 element at address: 0x20001c700040 with size: 0.936401 MiB 00:07:51.244 element at address: 0x200000200000 with size: 0.831909 MiB 00:07:51.244 element at address: 0x20001de00000 with size: 0.562195 MiB 00:07:51.244 element at address: 0x200003e00000 with size: 0.490417 MiB 00:07:51.244 element at address: 0x20001c000000 with size: 0.489197 MiB 00:07:51.244 element at address: 0x20001c800000 with size: 0.485413 MiB 00:07:51.244 element at address: 0x200015e00000 with size: 0.443481 MiB 00:07:51.244 element at address: 0x20002b200000 with size: 0.390442 MiB 00:07:51.244 element at address: 0x200003a00000 with size: 0.353088 MiB 00:07:51.244 list of standard malloc elements. size: 199.286499 MiB 00:07:51.244 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:07:51.244 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:07:51.244 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:07:51.244 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:07:51.244 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:07:51.244 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:51.244 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:07:51.244 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:51.244 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:07:51.244 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:07:51.244 element at address: 0x200015dff040 with size: 0.000305 MiB 00:07:51.244 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6200 with size: 0.000244 MiB 
00:07:51.244 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003aff800 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:07:51.244 element at 
address: 0x200003e7e2c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003efef00 with size: 0.000244 MiB 00:07:51.244 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:51.244 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff180 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff280 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff380 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff480 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff580 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff680 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff780 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff880 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dff980 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71880 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71980 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e72080 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015e72180 with size: 0.000244 MiB 00:07:51.245 element at address: 0x200015ef24c0 
with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de921c0 with size: 0.000244 MiB 
00:07:51.245 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:07:51.245 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:07:51.246 element at 
address: 0x20001de953c0 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b264040 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26dc80 
with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:07:51.246 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:07:51.246 list of memzone associated elements. 
size: 646.798706 MiB 00:07:51.246 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:07:51.246 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:51.246 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:07:51.246 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:51.246 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:07:51.246 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59202_0 00:07:51.247 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:51.247 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59202_0 00:07:51.247 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:51.247 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59202_0 00:07:51.247 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:07:51.247 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59202_0 00:07:51.247 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:07:51.247 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:51.247 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:07:51.247 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:51.247 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:51.247 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59202 00:07:51.247 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:51.247 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59202 00:07:51.247 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:51.247 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59202 00:07:51.247 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:07:51.247 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:51.247 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:07:51.247 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:51.247 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:07:51.247 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:51.247 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:07:51.247 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:51.247 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:51.247 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59202 00:07:51.247 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:51.247 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59202 00:07:51.247 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:07:51.247 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59202 00:07:51.247 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:07:51.247 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59202 00:07:51.247 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:07:51.247 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59202 00:07:51.247 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:07:51.247 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59202 00:07:51.247 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:07:51.247 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:51.247 element at address: 0x200015e72280 with size: 0.500549 MiB 00:07:51.247 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:07:51.247 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:07:51.247 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:51.247 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:07:51.247 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59202 00:07:51.247 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:07:51.247 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:51.247 element at address: 0x20002b264140 with size: 0.023804 MiB 00:07:51.247 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:51.247 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:07:51.247 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59202 00:07:51.247 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:07:51.247 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:51.247 element at address: 0x2000002d6080 with size: 0.000366 MiB 00:07:51.247 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59202 00:07:51.247 element at address: 0x200003aff900 with size: 0.000366 MiB 00:07:51.247 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59202 00:07:51.247 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:07:51.247 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59202 00:07:51.247 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:07:51.247 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:51.247 12:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:51.247 12:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59202 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59202 ']' 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59202 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59202 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.247 killing process with pid 59202 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59202' 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59202 00:07:51.247 12:18:14 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59202 00:07:53.795 00:07:53.795 real 0m4.246s 00:07:53.795 user 0m4.097s 00:07:53.795 sys 0m0.622s 00:07:53.795 12:18:16 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.795 12:18:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.795 ************************************ 00:07:53.795 END TEST dpdk_mem_utility 00:07:53.795 ************************************ 00:07:53.795 12:18:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:53.795 12:18:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.795 12:18:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.795 12:18:17 -- common/autotest_common.sh@10 -- # set +x 
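To recap the dpdk_mem_utility flow that just finished, using only the paths and calls already printed above: the test starts spdk_tgt, triggers a DPDK allocator dump via the env_dpdk_get_mem_stats RPC (which reported writing /tmp/spdk_mem_dump.txt), then post-processes that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element listing of heap 0. A minimal sketch of the same sequence by hand:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # detailed element list for heap 0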
00:07:53.795 ************************************ 00:07:53.795 START TEST event 00:07:53.795 ************************************ 00:07:53.795 12:18:17 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:54.054 * Looking for test storage... 00:07:54.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:54.054 12:18:17 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.054 12:18:17 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.054 12:18:17 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.054 12:18:17 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.054 12:18:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.054 12:18:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.054 12:18:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.054 12:18:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.054 12:18:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.054 12:18:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.054 12:18:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.054 12:18:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.054 12:18:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.054 12:18:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.054 12:18:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.054 12:18:17 event -- scripts/common.sh@344 -- # case "$op" in 00:07:54.054 12:18:17 event -- scripts/common.sh@345 -- # : 1 00:07:54.054 12:18:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.054 12:18:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.054 12:18:17 event -- scripts/common.sh@365 -- # decimal 1 00:07:54.054 12:18:17 event -- scripts/common.sh@353 -- # local d=1 00:07:54.054 12:18:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.054 12:18:17 event -- scripts/common.sh@355 -- # echo 1 00:07:54.054 12:18:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.054 12:18:17 event -- scripts/common.sh@366 -- # decimal 2 00:07:54.054 12:18:17 event -- scripts/common.sh@353 -- # local d=2 00:07:54.054 12:18:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.054 12:18:17 event -- scripts/common.sh@355 -- # echo 2 00:07:54.054 12:18:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.054 12:18:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.054 12:18:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.054 12:18:17 event -- scripts/common.sh@368 -- # return 0 00:07:54.054 12:18:17 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.054 12:18:17 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.054 --rc genhtml_branch_coverage=1 00:07:54.054 --rc genhtml_function_coverage=1 00:07:54.054 --rc genhtml_legend=1 00:07:54.054 --rc geninfo_all_blocks=1 00:07:54.054 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 12:18:17 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc 
geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 12:18:17 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 12:18:17 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 12:18:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:54.055 12:18:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:54.055 12:18:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:54.055 12:18:17 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:54.055 12:18:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.055 12:18:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.055 ************************************ 00:07:54.055 START TEST event_perf 00:07:54.055 ************************************ 00:07:54.055 12:18:17 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:54.055 Running I/O for 1 seconds...[2024-10-07 12:18:17.316656] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:54.055 [2024-10-07 12:18:17.316785] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59310 ] 00:07:54.314 [2024-10-07 12:18:17.490052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.573 [2024-10-07 12:18:17.697543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.573 [2024-10-07 12:18:17.697716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.573 Running I/O for 1 seconds...[2024-10-07 12:18:17.698344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.573 [2024-10-07 12:18:17.698373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.950 00:07:55.950 lcore 0: 180769 00:07:55.950 lcore 1: 180767 00:07:55.950 lcore 2: 180768 00:07:55.950 lcore 3: 180768 00:07:55.950 done. 
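Reading the event_perf result above: with -m 0xF and -t 1 the benchmark ran one reactor on each of four cores for one second, and each lcore drained roughly 180k events (723,072 in total, so about 723k events/sec across the mask). A hypothetical variation with a smaller mask and a longer run, reusing the flags from the invocation above:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5   # two reactors, five seconds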
00:07:55.950 00:07:55.950 real 0m1.825s 00:07:55.950 user 0m4.559s 00:07:55.950 sys 0m0.145s 00:07:55.950 12:18:19 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.950 12:18:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.950 ************************************ 00:07:55.950 END TEST event_perf 00:07:55.950 ************************************ 00:07:55.950 12:18:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:55.950 12:18:19 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:55.950 12:18:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.950 12:18:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.950 ************************************ 00:07:55.950 START TEST event_reactor 00:07:55.950 ************************************ 00:07:55.950 12:18:19 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:55.950 [2024-10-07 12:18:19.208819] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:55.950 [2024-10-07 12:18:19.209401] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59355 ] 00:07:56.209 [2024-10-07 12:18:19.379391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.468 [2024-10-07 12:18:19.580861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.845 test_start 00:07:57.845 oneshot 00:07:57.845 tick 100 00:07:57.845 tick 100 00:07:57.845 tick 250 00:07:57.845 tick 100 00:07:57.845 tick 100 00:07:57.845 tick 100 00:07:57.845 tick 250 00:07:57.845 tick 500 00:07:57.845 tick 100 00:07:57.845 tick 100 00:07:57.845 tick 250 00:07:57.845 tick 100 00:07:57.845 tick 100 00:07:57.845 test_end 00:07:57.845 00:07:57.845 real 0m1.806s 00:07:57.845 user 0m1.575s 00:07:57.845 sys 0m0.121s 00:07:57.845 12:18:20 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.845 12:18:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 ************************************ 00:07:57.845 END TEST event_reactor 00:07:57.845 ************************************ 00:07:57.845 12:18:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:57.845 12:18:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:57.845 12:18:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.845 12:18:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.845 ************************************ 00:07:57.845 START TEST event_reactor_perf 00:07:57.845 ************************************ 00:07:57.845 12:18:21 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:57.845 [2024-10-07 12:18:21.088595] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:57.845 [2024-10-07 12:18:21.088697] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:07:58.104 [2024-10-07 12:18:21.259895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.363 [2024-10-07 12:18:21.454094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.740 test_start 00:07:59.740 test_end 00:07:59.740 Performance: 391777 events per second 00:07:59.740 00:07:59.740 real 0m1.799s 00:07:59.740 user 0m1.570s 00:07:59.740 sys 0m0.118s 00:07:59.740 12:18:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.740 12:18:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.740 ************************************ 00:07:59.740 END TEST event_reactor_perf 00:07:59.740 ************************************ 00:07:59.740 12:18:22 event -- event/event.sh@49 -- # uname -s 00:07:59.741 12:18:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:59.741 12:18:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:59.741 12:18:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.741 12:18:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.741 12:18:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:59.741 ************************************ 00:07:59.741 START TEST event_scheduler 00:07:59.741 ************************************ 00:07:59.741 12:18:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:00.000 * Looking for test storage... 
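The reactor_perf figure above (391777 events per second) came from a single reactor (core mask 0x1 in the EAL parameters) running for the one second requested by -t 1. A longer run would smooth out startup cost; a hypothetical variation using the same binary path printed above:

    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 10   # same single-reactor benchmark, ten seconds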
00:08:00.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.000 12:18:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.000 --rc genhtml_branch_coverage=1 00:08:00.000 --rc genhtml_function_coverage=1 00:08:00.000 --rc genhtml_legend=1 00:08:00.000 --rc geninfo_all_blocks=1 00:08:00.000 --rc geninfo_unexecuted_blocks=1 00:08:00.000 00:08:00.000 ' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.000 --rc genhtml_branch_coverage=1 00:08:00.000 --rc genhtml_function_coverage=1 00:08:00.000 --rc genhtml_legend=1 00:08:00.000 --rc geninfo_all_blocks=1 00:08:00.000 --rc geninfo_unexecuted_blocks=1 00:08:00.000 00:08:00.000 ' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.000 --rc genhtml_branch_coverage=1 00:08:00.000 --rc genhtml_function_coverage=1 00:08:00.000 --rc genhtml_legend=1 00:08:00.000 --rc geninfo_all_blocks=1 00:08:00.000 --rc geninfo_unexecuted_blocks=1 00:08:00.000 00:08:00.000 ' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:00.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.000 --rc genhtml_branch_coverage=1 00:08:00.000 --rc genhtml_function_coverage=1 00:08:00.000 --rc genhtml_legend=1 00:08:00.000 --rc geninfo_all_blocks=1 00:08:00.000 --rc geninfo_unexecuted_blocks=1 00:08:00.000 00:08:00.000 ' 00:08:00.000 12:18:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:00.000 12:18:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59468 00:08:00.000 12:18:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:00.000 12:18:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.000 12:18:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59468 00:08:00.000 12:18:23 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59468 ']' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.000 12:18:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.000 [2024-10-07 12:18:23.227214] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:00.000 [2024-10-07 12:18:23.227346] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:08:00.260 [2024-10-07 12:18:23.399864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.519 [2024-10-07 12:18:23.658676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.519 [2024-10-07 12:18:23.658851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.519 [2024-10-07 12:18:23.659001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.519 [2024-10-07 12:18:23.659050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.777 12:18:24 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.777 12:18:24 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:00.777 12:18:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:00.777 12:18:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.777 12:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:00.777 POWER: Cannot set governor of lcore 0 to userspace 00:08:00.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:00.777 POWER: Cannot set governor of lcore 0 to performance 00:08:00.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:00.777 POWER: Cannot set governor of lcore 0 to userspace 00:08:00.777 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:00.777 POWER: Cannot set governor of lcore 0 to userspace 00:08:00.777 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:00.777 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:00.777 POWER: Unable to set Power Management Environment for lcore 0 00:08:00.777 [2024-10-07 12:18:24.056081] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:00.777 [2024-10-07 12:18:24.056110] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:00.777 [2024-10-07 12:18:24.056123] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:00.778 [2024-10-07 12:18:24.056145] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:00.778 [2024-10-07 12:18:24.056156] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:00.778 [2024-10-07 12:18:24.056180] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:00.778 12:18:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.778 12:18:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:00.778 12:18:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.778 12:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 [2024-10-07 12:18:24.456979] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:01.345 12:18:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:01.345 12:18:24 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.345 12:18:24 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 ************************************ 00:08:01.345 START TEST scheduler_create_thread 00:08:01.345 ************************************ 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 2 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 3 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 4 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 5 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 6 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 7 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 8 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.345 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.912 9 00:08:01.912 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.912 12:18:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:01.912 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.912 12:18:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.290 10 00:08:03.290 12:18:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.290 12:18:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:03.290 12:18:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.291 12:18:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.859 12:18:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.859 12:18:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:03.859 12:18:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:03.859 12:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.859 12:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.427 12:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.427 12:18:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:04.427 12:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.427 12:18:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.363 12:18:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.363 12:18:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:05.363 12:18:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:05.363 12:18:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.363 12:18:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.932 ************************************ 00:08:05.932 END TEST scheduler_create_thread 00:08:05.932 ************************************ 00:08:05.932 12:18:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.932 00:08:05.932 real 0m4.553s 00:08:05.932 user 0m0.026s 00:08:05.932 sys 0m0.010s 00:08:05.932 12:18:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.932 12:18:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.932 12:18:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:05.932 12:18:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59468 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59468 ']' 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59468 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59468 00:08:05.932 killing process with pid 59468 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59468' 00:08:05.932 12:18:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59468 00:08:05.932 12:18:29 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 59468 00:08:05.932 [2024-10-07 12:18:29.206168] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:07.311 00:08:07.311 real 0m7.649s 00:08:07.311 user 0m17.902s 00:08:07.311 sys 0m0.622s 00:08:07.311 12:18:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.311 ************************************ 00:08:07.311 END TEST event_scheduler 00:08:07.311 ************************************ 00:08:07.311 12:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 12:18:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:07.570 12:18:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:07.570 12:18:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.570 12:18:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.570 12:18:30 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 ************************************ 00:08:07.570 START TEST app_repeat 00:08:07.570 ************************************ 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59600 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:07.570 Process app_repeat pid: 59600 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59600' 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:07.570 spdk_app_start Round 0 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:07.570 12:18:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59600 /var/tmp/spdk-nbd.sock 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59600 ']' 00:08:07.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.570 12:18:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.570 [2024-10-07 12:18:30.704510] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
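The scheduler_create_thread test that just completed above is driven entirely over JSON-RPC through rpc.py's plugin mechanism. A minimal sketch of the same sequence, assuming the scheduler_plugin module is importable (the harness arranges PYTHONPATH for this; the rpc wrapper below is illustrative shorthand, not the harness's rpc_cmd):

    # Load the test's RPC plugin so the scheduler_thread_* methods resolve.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }

    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to core 0, 100% busy
    rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # pinned to core 0, fully idle
    id=$(rpc scheduler_thread_create -n half_active -a 0)       # unpinned; prints the new thread id
    rpc scheduler_thread_set_active "$id" 50                    # raise its busy cycles to 50%
    id=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$id"                           # threads are torn down the same way

The -m masks and -a activity percentages mirror the calls logged above; the dynamic scheduler then rebalances the reactors based on the busy/idle ratios these threads report.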
00:08:07.570 [2024-10-07 12:18:30.704630] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:08:07.570 [2024-10-07 12:18:30.861266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.139 [2024-10-07 12:18:31.127718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.139 [2024-10-07 12:18:31.127742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.398 12:18:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.398 12:18:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:08.398 12:18:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.656 Malloc0 00:08:08.656 12:18:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.916 Malloc1 00:08:08.916 12:18:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.916 12:18:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:09.175 /dev/nbd0 00:08:09.175 12:18:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:09.175 12:18:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:09.175 12:18:32 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:09.175 1+0 records in 00:08:09.175 1+0 records out 00:08:09.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353174 s, 11.6 MB/s 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.175 12:18:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:09.175 12:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.175 12:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:09.175 12:18:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:09.434 /dev/nbd1 00:08:09.434 12:18:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:09.434 12:18:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:09.434 1+0 records in 00:08:09.434 1+0 records out 00:08:09.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407117 s, 10.1 MB/s 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.434 12:18:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:09.434 12:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.434 12:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:09.434 12:18:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.434 12:18:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
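The waitfornbd retries visible above follow a simple pattern: poll /proc/partitions until the kernel exposes the device, then read one 4 KiB block with O_DIRECT to prove it is actually usable. A sketch under those assumptions (the retry delay and the temp-file path are not visible in the log and are illustrative):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1                 # assumption: retry delay is not shown in the log
        done
        ((i <= 20)) || return 1       # device never appeared
        # Read a single 4 KiB block with O_DIRECT to confirm the device works.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]              # the copied block must be non-empty
    }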
00:08:09.435 12:18:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.694 { 00:08:09.694 "nbd_device": "/dev/nbd0", 00:08:09.694 "bdev_name": "Malloc0" 00:08:09.694 }, 00:08:09.694 { 00:08:09.694 "nbd_device": "/dev/nbd1", 00:08:09.694 "bdev_name": "Malloc1" 00:08:09.694 } 00:08:09.694 ]' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.694 { 00:08:09.694 "nbd_device": "/dev/nbd0", 00:08:09.694 "bdev_name": "Malloc0" 00:08:09.694 }, 00:08:09.694 { 00:08:09.694 "nbd_device": "/dev/nbd1", 00:08:09.694 "bdev_name": "Malloc1" 00:08:09.694 } 00:08:09.694 ]' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.694 /dev/nbd1' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.694 /dev/nbd1' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:09.694 256+0 records in 00:08:09.694 256+0 records out 00:08:09.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121331 s, 86.4 MB/s 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.694 256+0 records in 00:08:09.694 256+0 records out 00:08:09.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244996 s, 42.8 MB/s 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.694 12:18:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.955 256+0 records in 00:08:09.955 256+0 records out 00:08:09.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032644 s, 32.1 MB/s 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.955 12:18:32 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.955 12:18:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.955 12:18:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.232 12:18:33 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.232 12:18:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:10.502 12:18:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:10.502 12:18:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:11.070 12:18:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:12.449 [2024-10-07 12:18:35.404600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:12.449 [2024-10-07 12:18:35.597175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.449 [2024-10-07 12:18:35.597176] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.708 [2024-10-07 12:18:35.785353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:12.708 [2024-10-07 12:18:35.785443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:14.085 spdk_app_start Round 1 00:08:14.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:14.085 12:18:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:14.085 12:18:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:14.085 12:18:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59600 /var/tmp/spdk-nbd.sock 00:08:14.085 12:18:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59600 ']' 00:08:14.085 12:18:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:14.086 12:18:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.086 12:18:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
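Each round above ends with spdk_kill_instance SIGTERM, a 3-second sleep, and a "Waiting for process to start up and listen..." message before the next round's RPCs are sent. A sketch of that waitforlisten step; max_retries=100 matches the trace, while using rpc_get_methods as the readiness probe is an assumption:

    waitforlisten() {
        local pid=$1 sock=$2 i
        for ((i = 1; i <= 100; i++)); do
            kill -0 "$pid" || return 1      # the app died while we were waiting
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" \
                rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                            # socket never became ready
    }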
00:08:14.086 12:18:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.086 12:18:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:14.086 12:18:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.086 12:18:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:14.086 12:18:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:14.344 Malloc0 00:08:14.345 12:18:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:14.603 Malloc1 00:08:14.863 12:18:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:14.863 12:18:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:14.863 /dev/nbd0 00:08:14.863 12:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:14.863 12:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:14.863 12:18:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:14.863 1+0 records in 00:08:14.863 1+0 records out 
00:08:14.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033121 s, 12.4 MB/s 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:15.122 12:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.122 12:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.122 12:18:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:15.122 /dev/nbd1 00:08:15.122 12:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:15.122 12:18:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:15.122 1+0 records in 00:08:15.122 1+0 records out 00:08:15.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398129 s, 10.3 MB/s 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:15.122 12:18:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:15.123 12:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.123 12:18:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.123 12:18:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.123 12:18:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.380 12:18:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.380 12:18:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:15.380 { 00:08:15.380 "nbd_device": "/dev/nbd0", 00:08:15.380 "bdev_name": "Malloc0" 00:08:15.380 }, 00:08:15.380 { 00:08:15.380 "nbd_device": "/dev/nbd1", 00:08:15.380 "bdev_name": "Malloc1" 00:08:15.380 } 
00:08:15.380 ]' 00:08:15.380 12:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:15.380 { 00:08:15.380 "nbd_device": "/dev/nbd0", 00:08:15.380 "bdev_name": "Malloc0" 00:08:15.380 }, 00:08:15.380 { 00:08:15.380 "nbd_device": "/dev/nbd1", 00:08:15.380 "bdev_name": "Malloc1" 00:08:15.380 } 00:08:15.380 ]' 00:08:15.380 12:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:15.639 /dev/nbd1' 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:15.639 /dev/nbd1' 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:15.639 256+0 records in 00:08:15.639 256+0 records out 00:08:15.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118027 s, 88.8 MB/s 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:15.639 256+0 records in 00:08:15.639 256+0 records out 00:08:15.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276296 s, 38.0 MB/s 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:15.639 256+0 records in 00:08:15.639 256+0 records out 00:08:15.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349372 s, 30.0 MB/s 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:15.639 12:18:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:15.640 12:18:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.640 12:18:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.899 12:18:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.158 12:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:16.417 12:18:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:16.417 12:18:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:16.676 12:18:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:18.583 [2024-10-07 12:18:41.377978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:18.583 [2024-10-07 12:18:41.633033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.583 [2024-10-07 12:18:41.633052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.583 [2024-10-07 12:18:41.858952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:18.583 [2024-10-07 12:18:41.859048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:19.961 spdk_app_start Round 2 00:08:19.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:19.961 12:18:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:19.961 12:18:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:19.961 12:18:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59600 /var/tmp/spdk-nbd.sock 00:08:19.961 12:18:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59600 ']' 00:08:19.961 12:18:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:19.961 12:18:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.961 12:18:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
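The write/verify pass each round runs against the two nbd devices reduces to the dd and cmp steps whose transfer statistics appear above: fill a temp file with 1 MiB of random data, stream it to each device with O_DIRECT, then compare the first 1 MiB back against the source. A sketch with illustrative paths:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it out
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte verify
    done
    rm "$tmp"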
00:08:19.961 12:18:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.961 12:18:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:19.961 12:18:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.961 12:18:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:19.961 12:18:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:20.282 Malloc0 00:08:20.283 12:18:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:20.541 Malloc1 00:08:20.541 12:18:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:20.541 12:18:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:20.800 /dev/nbd0 00:08:20.800 12:18:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:20.800 12:18:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:20.800 1+0 records in 00:08:20.800 1+0 records out 
00:08:20.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287408 s, 14.3 MB/s 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:20.800 12:18:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:20.800 12:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:20.800 12:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:20.800 12:18:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:21.058 /dev/nbd1 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:21.059 1+0 records in 00:08:21.059 1+0 records out 00:08:21.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319665 s, 12.8 MB/s 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:21.059 12:18:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.059 12:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:21.317 12:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:21.317 { 00:08:21.317 "nbd_device": "/dev/nbd0", 00:08:21.317 "bdev_name": "Malloc0" 00:08:21.317 }, 00:08:21.317 { 00:08:21.317 "nbd_device": "/dev/nbd1", 00:08:21.317 "bdev_name": "Malloc1" 00:08:21.317 } 
00:08:21.317 ]' 00:08:21.317 12:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:21.317 { 00:08:21.317 "nbd_device": "/dev/nbd0", 00:08:21.317 "bdev_name": "Malloc0" 00:08:21.317 }, 00:08:21.318 { 00:08:21.318 "nbd_device": "/dev/nbd1", 00:08:21.318 "bdev_name": "Malloc1" 00:08:21.318 } 00:08:21.318 ]' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:21.318 /dev/nbd1' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:21.318 /dev/nbd1' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:21.318 256+0 records in 00:08:21.318 256+0 records out 00:08:21.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121186 s, 86.5 MB/s 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:21.318 256+0 records in 00:08:21.318 256+0 records out 00:08:21.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320526 s, 32.7 MB/s 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:21.318 256+0 records in 00:08:21.318 256+0 records out 00:08:21.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032597 s, 32.2 MB/s 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.318 12:18:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.576 12:18:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.835 12:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:22.093 12:18:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:22.093 12:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:22.093 12:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:08:22.093 12:18:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:22.094 12:18:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:22.094 12:18:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:22.661 12:18:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:24.038 [2024-10-07 12:18:47.155258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.298 [2024-10-07 12:18:47.386751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.298 [2024-10-07 12:18:47.386759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.556 [2024-10-07 12:18:47.616351] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:24.556 [2024-10-07 12:18:47.616422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:25.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:25.492 12:18:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59600 /var/tmp/spdk-nbd.sock 00:08:25.492 12:18:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59600 ']' 00:08:25.492 12:18:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:25.492 12:18:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.492 12:18:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
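For readers following the xtrace: the nbd data-verify pass above is the nbd_dd_data_verify helper from test/bdev/nbd_common.sh. A minimal sketch reconstructed from the trace follows; the seed-file path matches this run, and the bodies are approximations rather than the verbatim helper.

    nbd_dd_data_verify() {
        local nbd_list=($1)    # e.g. "/dev/nbd0 /dev/nbd1"
        local operation=$2     # "write" or "verify"
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

        if [ "$operation" = write ]; then
            # Seed 1 MiB (256 x 4096 B) of random data, then copy it to every
            # NBD device with O_DIRECT so the writes bypass the page cache.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Byte-compare the first 1 MiB of each device against the seed
            # file; cmp exits non-zero on the first mismatch, failing the test.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

The write/verify round trip is what proves the two Malloc bdevs exported over /dev/nbd0 and /dev/nbd1 actually persist data.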
00:08:25.492 12:18:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.492 12:18:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:25.752 12:18:48 event.app_repeat -- event/event.sh@39 -- # killprocess 59600 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59600 ']' 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59600 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59600 00:08:25.752 killing process with pid 59600 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59600' 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59600 00:08:25.752 12:18:48 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59600 00:08:27.129 spdk_app_start is called in Round 0. 00:08:27.129 Shutdown signal received, stop current app iteration 00:08:27.129 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:08:27.129 spdk_app_start is called in Round 1. 00:08:27.129 Shutdown signal received, stop current app iteration 00:08:27.129 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:08:27.129 spdk_app_start is called in Round 2. 00:08:27.129 Shutdown signal received, stop current app iteration 00:08:27.129 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:08:27.129 spdk_app_start is called in Round 3. 00:08:27.129 Shutdown signal received, stop current app iteration 00:08:27.129 12:18:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:27.129 12:18:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:27.129 00:08:27.129 real 0m19.652s 00:08:27.129 user 0m40.031s 00:08:27.129 sys 0m3.404s 00:08:27.129 12:18:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.129 ************************************ 00:08:27.129 END TEST app_repeat 00:08:27.129 ************************************ 00:08:27.129 12:18:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:27.129 12:18:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:27.129 12:18:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:27.129 12:18:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.129 12:18:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.129 12:18:50 event -- common/autotest_common.sh@10 -- # set +x 00:08:27.129 ************************************ 00:08:27.129 START TEST cpu_locks 00:08:27.129 ************************************ 00:08:27.130 12:18:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:27.389 * Looking for test storage... 
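The killprocess helper traced above (common/autotest_common.sh) does roughly the following; the sudo branch is simplified here, since the real helper has extra handling for sudo-wrapped targets.

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                      # fails if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # refuse to signal a sudo wrapper directly (simplification)
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                 # reap; the signal exit code is expected
    }

In the run above, process_name came back as reactor_0, i.e. the SPDK app thread, so the kill/wait pair proceeded normally.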
00:08:27.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.389 12:18:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.389 --rc genhtml_branch_coverage=1 00:08:27.389 --rc genhtml_function_coverage=1 00:08:27.389 --rc genhtml_legend=1 00:08:27.389 --rc geninfo_all_blocks=1 00:08:27.389 --rc geninfo_unexecuted_blocks=1 00:08:27.389 00:08:27.389 ' 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.389 --rc genhtml_branch_coverage=1 00:08:27.389 --rc genhtml_function_coverage=1 
00:08:27.389 --rc genhtml_legend=1 00:08:27.389 --rc geninfo_all_blocks=1 00:08:27.389 --rc geninfo_unexecuted_blocks=1 00:08:27.389 00:08:27.389 ' 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.389 --rc genhtml_branch_coverage=1 00:08:27.389 --rc genhtml_function_coverage=1 00:08:27.389 --rc genhtml_legend=1 00:08:27.389 --rc geninfo_all_blocks=1 00:08:27.389 --rc geninfo_unexecuted_blocks=1 00:08:27.389 00:08:27.389 ' 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.389 --rc genhtml_branch_coverage=1 00:08:27.389 --rc genhtml_function_coverage=1 00:08:27.389 --rc genhtml_legend=1 00:08:27.389 --rc geninfo_all_blocks=1 00:08:27.389 --rc geninfo_unexecuted_blocks=1 00:08:27.389 00:08:27.389 ' 00:08:27.389 12:18:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:27.389 12:18:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:27.389 12:18:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:27.389 12:18:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.389 12:18:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 ************************************ 00:08:27.389 START TEST default_locks 00:08:27.389 ************************************ 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60049 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60049 00:08:27.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60049 ']' 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.389 12:18:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:27.649 [2024-10-07 12:18:50.719073] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
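The lcov gate traced just before this test (lt 1.15 2, which expands to cmp_versions 1.15 '<' 2 in scripts/common.sh) splits both version strings on '.', '-' or ':' and compares them component by component, treating missing components as 0. A sketch, not the verbatim implementation:

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # a missing component counts as 0, so 1.15 vs 2 becomes 1.15 vs 2.0
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then
                [[ $op == '>' || $op == '>=' ]]
                return
            elif (( a < b )); then
                [[ $op == '<' || $op == '<=' ]]
                return
            fi
        done
        [[ $op == *=* ]]    # all components equal: true for ==, <= and >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 → true, as in the trace

Here 1 < 2 already holds on the first component, so lt 1.15 2 succeeds and the branch/function coverage options above get exported.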
00:08:27.649 [2024-10-07 12:18:50.719224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:08:27.649 [2024-10-07 12:18:50.890524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.908 [2024-10-07 12:18:51.135290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.285 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.285 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:29.285 12:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60049 00:08:29.286 12:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60049 00:08:29.286 12:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60049 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60049 ']' 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60049 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60049 00:08:29.557 killing process with pid 60049 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60049' 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60049 00:08:29.557 12:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60049 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60049 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60049 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60049 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60049 ']' 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.863 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.863 ERROR: process (pid: 60049) is no longer running 00:08:32.863 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60049) - No such process 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:32.864 00:08:32.864 real 0m4.919s 00:08:32.864 user 0m4.665s 00:08:32.864 sys 0m0.907s 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.864 ************************************ 00:08:32.864 END TEST default_locks 00:08:32.864 ************************************ 00:08:32.864 12:18:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.864 12:18:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:32.864 12:18:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.864 12:18:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.864 12:18:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.864 ************************************ 00:08:32.864 START TEST default_locks_via_rpc 00:08:32.864 ************************************ 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60137 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60137 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60137 ']' 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.864 12:18:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.864 [2024-10-07 12:18:55.710454] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:32.864 [2024-10-07 12:18:55.710588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:08:32.864 [2024-10-07 12:18:55.882708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.864 [2024-10-07 12:18:56.089519] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.801 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60137 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60137 00:08:33.802 12:18:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60137 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60137 ']' 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60137 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60137 00:08:34.370 killing process with pid 60137 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60137' 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60137 00:08:34.370 12:18:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60137 00:08:36.906 ************************************ 00:08:36.906 END TEST default_locks_via_rpc 00:08:36.906 ************************************ 00:08:36.906 00:08:36.906 real 0m4.341s 00:08:36.906 user 0m4.242s 00:08:36.906 sys 0m0.735s 00:08:36.906 12:18:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.906 12:18:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.906 12:19:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:36.906 12:19:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.906 12:19:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.906 12:19:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.906 ************************************ 00:08:36.906 START TEST non_locking_app_on_locked_coremask 00:08:36.906 ************************************ 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60212 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60212 /var/tmp/spdk.sock 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60212 ']' 00:08:36.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.906 12:19:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:36.906 [2024-10-07 12:19:00.131648] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
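The expected-failure checks in these lock tests (NOT waitforlisten <pid> after the target was killed, seen above for pid 60049) go through the NOT helper in common/autotest_common.sh. Approximately:

    NOT() {
        local es=0
        "$@" || es=$?
        # an exit status above 128 means death by signal, which counts as a
        # real failure rather than the expected error return
        if (( es > 128 )); then
            return "$es"
        fi
        (( es != 0 ))    # succeed only if the wrapped command failed
    }

So the 'No such process' and 'es=1' lines above are the helper confirming that waitforlisten failed exactly the way the test wanted it to.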
00:08:36.906 [2024-10-07 12:19:00.131790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60212 ] 00:08:37.165 [2024-10-07 12:19:00.302127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.424 [2024-10-07 12:19:00.517127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60228 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60228 /var/tmp/spdk2.sock 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60228 ']' 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:38.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.360 12:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.360 [2024-10-07 12:19:01.501000] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:38.360 [2024-10-07 12:19:01.501793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:08:38.619 [2024-10-07 12:19:01.668310] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
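Condensed, the scenario this test just set up looks like the following; the paths match the log, but this is an illustration rather than the verbatim test script.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                         # claims the core 0 lock file
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    # same core mask, but opts out of the lock check and uses its own RPC socket
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock  # succeeds: 'CPU core locks deactivated'

Without --disable-cpumask-locks the second instance would abort on the core 0 lock, which is exactly what the locking_app_on_locked_coremask test further down demonstrates.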
00:08:38.619 [2024-10-07 12:19:01.668399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.878 [2024-10-07 12:19:02.108634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.786 12:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.786 12:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:40.786 12:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60212 00:08:40.786 12:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60212 00:08:40.786 12:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60212 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60212 ']' 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60212 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60212 00:08:42.165 killing process with pid 60212 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60212' 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60212 00:08:42.165 12:19:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60212 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60228 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60228 ']' 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60228 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60228 00:08:47.439 killing process with pid 60228 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60228' 00:08:47.439 12:19:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60228 00:08:47.439 12:19:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60228 00:08:49.348 00:08:49.348 real 0m12.594s 00:08:49.348 user 0m12.853s 00:08:49.348 sys 0m1.570s 00:08:49.348 12:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.348 12:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.348 ************************************ 00:08:49.348 END TEST non_locking_app_on_locked_coremask 00:08:49.348 ************************************ 00:08:49.608 12:19:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:49.608 12:19:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.608 12:19:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.608 12:19:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.608 ************************************ 00:08:49.608 START TEST locking_app_on_unlocked_coremask 00:08:49.608 ************************************ 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60389 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60389 /var/tmp/spdk.sock 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60389 ']' 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.608 12:19:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.608 [2024-10-07 12:19:12.795715] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:49.608 [2024-10-07 12:19:12.796134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60389 ] 00:08:49.867 [2024-10-07 12:19:12.965118] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
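locks_exist, traced here and in every other cpu_locks subtest, is essentially a one-liner from test/event/cpu_locks.sh: ask the kernel (via lslocks) which file locks the pid holds and look for an spdk_cpu_lock entry.

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # e.g. in the test above: passes while the locked target (pid 60212) is alive
    locks_exist 60212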
00:08:49.867 [2024-10-07 12:19:12.965470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.133 [2024-10-07 12:19:13.171534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:51.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60405 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60405 /var/tmp/spdk2.sock 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60405 ']' 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.073 12:19:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.073 [2024-10-07 12:19:14.159315] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:51.073 [2024-10-07 12:19:14.159612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60405 ] 00:08:51.073 [2024-10-07 12:19:14.324928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.641 [2024-10-07 12:19:14.753461] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.548 12:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.548 12:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:53.548 12:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60405 00:08:53.548 12:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60405 00:08:53.548 12:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60389 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60389 ']' 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60389 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60389 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.487 killing process with pid 60389 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60389' 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60389 00:08:54.487 12:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60389 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60405 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60405 ']' 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60405 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60405 00:08:59.760 killing process with pid 60405 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.760 12:19:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60405' 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60405 00:08:59.760 12:19:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60405 00:09:02.294 00:09:02.294 real 0m12.452s 00:09:02.294 user 0m12.708s 00:09:02.294 sys 0m1.483s 00:09:02.294 ************************************ 00:09:02.294 END TEST locking_app_on_unlocked_coremask 00:09:02.294 ************************************ 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.294 12:19:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:02.294 12:19:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.294 12:19:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.294 12:19:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.294 ************************************ 00:09:02.294 START TEST locking_app_on_locked_coremask 00:09:02.294 ************************************ 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60566 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60566 /var/tmp/spdk.sock 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60566 ']' 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.294 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.295 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.295 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.295 12:19:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.295 [2024-10-07 12:19:25.320295] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
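Every START/END banner and real/user/sys triple in this log comes from the run_test wrapper in common/autotest_common.sh. Trimmed here to its observable behavior (the real helper also records per-test timing data):

    run_test() {
        [ $# -le 1 ] && return 1    # need a test name plus a command
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The recurring '[' 2 -le 1 ']' checks in the trace are this argument-count guard evaluating for calls that pass a test name plus one script argument.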
00:09:02.295 [2024-10-07 12:19:25.320453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:09:02.295 [2024-10-07 12:19:25.489155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.553 [2024-10-07 12:19:25.690331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60587 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60587 /var/tmp/spdk2.sock 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60587 /var/tmp/spdk2.sock 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60587 /var/tmp/spdk2.sock 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60587 ']' 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:03.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.491 12:19:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.491 [2024-10-07 12:19:26.647762] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:09:03.491 [2024-10-07 12:19:26.648354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60587 ] 00:09:03.750 [2024-10-07 12:19:26.815278] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60566 has claimed it. 00:09:03.750 [2024-10-07 12:19:26.815357] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:04.009 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60587) - No such process 00:09:04.009 ERROR: process (pid: 60587) is no longer running 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60566 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60566 00:09:04.009 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60566 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60566 ']' 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60566 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60566 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.577 killing process with pid 60566 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60566' 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60566 00:09:04.577 12:19:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60566 00:09:07.112 00:09:07.112 real 0m5.095s 00:09:07.112 user 0m5.245s 00:09:07.112 sys 0m0.893s 00:09:07.112 12:19:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.112 12:19:30 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:09:07.112 ************************************ 00:09:07.112 END TEST locking_app_on_locked_coremask 00:09:07.112 ************************************ 00:09:07.112 12:19:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:07.112 12:19:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.112 12:19:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.112 12:19:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:07.112 ************************************ 00:09:07.112 START TEST locking_overlapped_coremask 00:09:07.112 ************************************ 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60657 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60657 /var/tmp/spdk.sock 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60657 ']' 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.112 12:19:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 [2024-10-07 12:19:30.489204] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
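The 'Cannot create lock on core 0, probably process 60566 has claimed it' failure above is app.c's claim_cpu_cores refusing to start a second reactor on an already-claimed core. Conceptually, each core is guarded by an exclusive advisory lock on a /var/tmp/spdk_cpu_lock_NNN file; SPDK does this in C inside app.c, so the flock(1) sketch below only mirrors the idea and is not SPDK code.

    core=0
    exec 9> "/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")"
    if ! flock -n 9; then
        echo "Cannot create lock on core $core, another process has claimed it" >&2
        exit 1
    fi
    # the lock is released automatically when fd 9 closes, i.e. on process exit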
00:09:07.374 [2024-10-07 12:19:30.489330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 00:09:07.374 [2024-10-07 12:19:30.656054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:07.633 [2024-10-07 12:19:30.867491] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.633 [2024-10-07 12:19:30.867637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.633 [2024-10-07 12:19:30.867667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60681 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60681 /var/tmp/spdk2.sock 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60681 /var/tmp/spdk2.sock 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60681 /var/tmp/spdk2.sock 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60681 ']' 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.570 12:19:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.570 [2024-10-07 12:19:31.847106] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
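A quick read on the two core masks in play here: -m 0x7 is binary 111, i.e. cores 0, 1 and 2 (matching the three reactors started above), while -m 0x1c is binary 11100, i.e. cores 2, 3 and 4. The masks overlap only at core 2, so the second target is expected to trip the core lock there, which is exactly the error the next lines show.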
00:09:08.571 [2024-10-07 12:19:31.847250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60681 ] 00:09:08.830 [2024-10-07 12:19:32.014980] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60657 has claimed it. 00:09:08.830 [2024-10-07 12:19:32.015044] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:09.429 ERROR: process (pid: 60681) is no longer running 00:09:09.429 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60681) - No such process 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60657 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60657 ']' 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60657 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60657 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.429 killing process with pid 60657 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60657' 00:09:09.429 12:19:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60657 00:09:09.429 12:19:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60657 00:09:11.967 00:09:11.967 real 0m4.666s 00:09:11.967 user 0m12.202s 00:09:11.967 sys 0m0.654s 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.967 ************************************ 00:09:11.967 END TEST locking_overlapped_coremask 00:09:11.967 ************************************ 00:09:11.967 12:19:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:11.967 12:19:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.967 12:19:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.967 12:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.967 ************************************ 00:09:11.967 START TEST locking_overlapped_coremask_via_rpc 00:09:11.967 ************************************ 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60745 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60745 /var/tmp/spdk.sock 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60745 ']' 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.967 12:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.967 [2024-10-07 12:19:35.230630] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:11.967 [2024-10-07 12:19:35.230753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60745 ] 00:09:12.226 [2024-10-07 12:19:35.403604] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
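The "CPU core locks deactivated." notice marks what distinguishes this test from the previous one: with --disable-cpumask-locks, startup skips the claim_cpu_cores check, so the second, overlapping instance launched just below comes up cleanly and the core conflict is only provoked later over RPC. The two launch commands, lifted from this trace into a hedged standalone sketch:

    $ build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    $ build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # both start despite sharing core 2, since no lock files are taken yet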
00:09:12.226 [2024-10-07 12:19:35.403664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.484 [2024-10-07 12:19:35.618732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.484 [2024-10-07 12:19:35.618878] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.484 [2024-10-07 12:19:35.618940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60763 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60763 /var/tmp/spdk2.sock 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60763 ']' 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.422 12:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.422 [2024-10-07 12:19:36.606718] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:13.422 [2024-10-07 12:19:36.607037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60763 ] 00:09:13.681 [2024-10-07 12:19:36.773887] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:13.681 [2024-10-07 12:19:36.773946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.940 [2024-10-07 12:19:37.202087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.940 [2024-10-07 12:19:37.202234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.940 [2024-10-07 12:19:37.202267] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 [2024-10-07 12:19:39.171141] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60745 has claimed it. 
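The claim_cpu_cores error above comes from SPDK's per-core lock files: each claimed core is guarded by /var/tmp/spdk_cpu_lock_NNN (the check_remaining_locks helper in this suite expands exactly that glob), and a process that cannot take the lock for one of its cores gives up. A hedged sketch of inspecting the locks by hand while the first target holds cores 0-2; fuser is illustrative here, it should report the spdk_tgt pid keeping each lock file open:

    $ ls /var/tmp/spdk_cpu_lock_*
    /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002
    $ fuser /var/tmp/spdk_cpu_lock_002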
00:09:16.476 request: 00:09:16.476 { 00:09:16.476 "method": "framework_enable_cpumask_locks", 00:09:16.476 "req_id": 1 00:09:16.476 } 00:09:16.476 Got JSON-RPC error response 00:09:16.476 response: 00:09:16.476 { 00:09:16.476 "code": -32603, 00:09:16.476 "message": "Failed to claim CPU core: 2" 00:09:16.476 } 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60745 /var/tmp/spdk.sock 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60745 ']' 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60763 /var/tmp/spdk2.sock 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60763 ']' 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
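The -32603 response above is the runtime flavor of the same conflict: framework_enable_cpumask_locks asks an already-running target to claim its cores, and the second target loses core 2 to pid 60745. Driven directly through rpc.py rather than the test helpers, the exchange would look like this (socket paths as in this run; a hedged sketch, not part of the trace):

    $ scripts/rpc.py framework_enable_cpumask_locks
    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # second call is expected to fail with -32603 while core 2 is already claimed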
00:09:16.476 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:16.477 ************************************ 00:09:16.477 END TEST locking_overlapped_coremask_via_rpc 00:09:16.477 ************************************ 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:16.477 00:09:16.477 real 0m4.495s 00:09:16.477 user 0m1.259s 00:09:16.477 sys 0m0.214s 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.477 12:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.477 12:19:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:16.477 12:19:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60745 ]] 00:09:16.477 12:19:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60745 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60745 ']' 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60745 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60745 00:09:16.477 killing process with pid 60745 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60745' 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60745 00:09:16.477 12:19:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60745 00:09:19.013 12:19:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60763 ]] 00:09:19.013 12:19:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60763 00:09:19.013 12:19:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60763 ']' 00:09:19.013 12:19:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60763 00:09:19.013 12:19:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:19.013 12:19:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.013 
12:19:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60763 00:09:19.272 killing process with pid 60763 00:09:19.272 12:19:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:19.272 12:19:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:19.272 12:19:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60763' 00:09:19.272 12:19:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60763 00:09:19.272 12:19:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60763 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60745 ]] 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60745 00:09:22.594 Process with pid 60745 is not found 00:09:22.594 Process with pid 60763 is not found 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60745 ']' 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60745 00:09:22.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60745) - No such process 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60745 is not found' 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60763 ]] 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60763 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60763 ']' 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60763 00:09:22.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60763) - No such process 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60763 is not found' 00:09:22.594 12:19:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:22.594 00:09:22.594 real 0m54.827s 00:09:22.594 user 1m30.418s 00:09:22.594 sys 0m7.847s 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.594 12:19:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.594 ************************************ 00:09:22.594 END TEST cpu_locks 00:09:22.594 ************************************ 00:09:22.594 00:09:22.594 real 1m28.222s 00:09:22.594 user 2m36.309s 00:09:22.594 sys 0m12.676s 00:09:22.594 12:19:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.594 12:19:45 event -- common/autotest_common.sh@10 -- # set +x 00:09:22.594 ************************************ 00:09:22.594 END TEST event 00:09:22.594 ************************************ 00:09:22.594 12:19:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:22.594 12:19:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.594 12:19:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.594 12:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:22.594 ************************************ 00:09:22.594 START TEST thread 00:09:22.594 ************************************ 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:22.594 * Looking for test storage... 
00:09:22.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:22.594 12:19:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.594 12:19:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.594 12:19:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.594 12:19:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.594 12:19:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.594 12:19:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.594 12:19:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.594 12:19:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.594 12:19:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.594 12:19:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.594 12:19:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.594 12:19:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:22.594 12:19:45 thread -- scripts/common.sh@345 -- # : 1 00:09:22.594 12:19:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.594 12:19:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.594 12:19:45 thread -- scripts/common.sh@365 -- # decimal 1 00:09:22.594 12:19:45 thread -- scripts/common.sh@353 -- # local d=1 00:09:22.594 12:19:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.594 12:19:45 thread -- scripts/common.sh@355 -- # echo 1 00:09:22.594 12:19:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.594 12:19:45 thread -- scripts/common.sh@366 -- # decimal 2 00:09:22.594 12:19:45 thread -- scripts/common.sh@353 -- # local d=2 00:09:22.594 12:19:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.594 12:19:45 thread -- scripts/common.sh@355 -- # echo 2 00:09:22.594 12:19:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.594 12:19:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.594 12:19:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.594 12:19:45 thread -- scripts/common.sh@368 -- # return 0 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:22.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.594 --rc genhtml_branch_coverage=1 00:09:22.594 --rc genhtml_function_coverage=1 00:09:22.594 --rc genhtml_legend=1 00:09:22.594 --rc geninfo_all_blocks=1 00:09:22.594 --rc geninfo_unexecuted_blocks=1 00:09:22.594 00:09:22.594 ' 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:22.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.594 --rc genhtml_branch_coverage=1 00:09:22.594 --rc genhtml_function_coverage=1 00:09:22.594 --rc genhtml_legend=1 00:09:22.594 --rc geninfo_all_blocks=1 00:09:22.594 --rc geninfo_unexecuted_blocks=1 00:09:22.594 00:09:22.594 ' 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:22.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:22.594 --rc genhtml_branch_coverage=1 00:09:22.594 --rc genhtml_function_coverage=1 00:09:22.594 --rc genhtml_legend=1 00:09:22.594 --rc geninfo_all_blocks=1 00:09:22.594 --rc geninfo_unexecuted_blocks=1 00:09:22.594 00:09:22.594 ' 00:09:22.594 12:19:45 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:22.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.595 --rc genhtml_branch_coverage=1 00:09:22.595 --rc genhtml_function_coverage=1 00:09:22.595 --rc genhtml_legend=1 00:09:22.595 --rc geninfo_all_blocks=1 00:09:22.595 --rc geninfo_unexecuted_blocks=1 00:09:22.595 00:09:22.595 ' 00:09:22.595 12:19:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:22.595 12:19:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:22.595 12:19:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.595 12:19:45 thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.595 ************************************ 00:09:22.595 START TEST thread_poller_perf 00:09:22.595 ************************************ 00:09:22.595 12:19:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:22.595 [2024-10-07 12:19:45.606410] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:22.595 [2024-10-07 12:19:45.606628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60970 ] 00:09:22.595 [2024-10-07 12:19:45.778321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.854 [2024-10-07 12:19:45.986477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.854 Running 1000 pollers for 1 seconds with 1 microseconds period. 
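The banner above decodes the flags of this invocation: -b 1000 pollers, -l 1 microsecond period, -t 1 second of runtime. The flag meanings are inferred from that banner, so treat this as a hedged sketch of running the binary outside the harness:

    $ test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
    # -l 0 is the zero-period (busy-loop) variant measured in the second run below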
00:09:24.232 [2024-10-07T12:19:47.523Z] ====================================== 00:09:24.232 [2024-10-07T12:19:47.523Z] busy:2500497980 (cyc) 00:09:24.232 [2024-10-07T12:19:47.523Z] total_run_count: 403000 00:09:24.232 [2024-10-07T12:19:47.523Z] tsc_hz: 2490000000 (cyc) 00:09:24.232 [2024-10-07T12:19:47.523Z] ====================================== 00:09:24.232 [2024-10-07T12:19:47.523Z] poller_cost: 6204 (cyc), 2491 (nsec) 00:09:24.232 00:09:24.232 real 0m1.821s 00:09:24.232 ************************************ 00:09:24.232 END TEST thread_poller_perf 00:09:24.232 ************************************ 00:09:24.232 user 0m1.587s 00:09:24.232 sys 0m0.126s 00:09:24.232 12:19:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.232 12:19:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:24.232 12:19:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:24.232 12:19:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:24.232 12:19:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.232 12:19:47 thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.232 ************************************ 00:09:24.232 START TEST thread_poller_perf 00:09:24.232 ************************************ 00:09:24.232 12:19:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:24.232 [2024-10-07 12:19:47.507280] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:24.232 [2024-10-07 12:19:47.507518] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61012 ] 00:09:24.491 [2024-10-07 12:19:47.676050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.750 Running 1000 pollers for 1 seconds with 0 microseconds period. 
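The poller_cost line in the first run is plain division over the counters the table prints: busy cycles over total_run_count gives cycles per poller invocation, and scaling by tsc_hz converts cycles to nanoseconds. Reproducing the arithmetic with the numbers copied from that run (bc is only used here for checking):

    $ echo '2500497980 / 403000' | bc
    6204
    $ echo '6204 * 1000000000 / 2490000000' | bc
    2491
    # i.e. 6204 cycles and 2491 nsec per poll, matching the reported poller_cost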
00:09:24.750 [2024-10-07 12:19:47.886554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.125 [2024-10-07T12:19:49.416Z] ====================================== 00:09:26.125 [2024-10-07T12:19:49.416Z] busy:2493778060 (cyc) 00:09:26.125 [2024-10-07T12:19:49.416Z] total_run_count: 5304000 00:09:26.125 [2024-10-07T12:19:49.416Z] tsc_hz: 2490000000 (cyc) 00:09:26.125 [2024-10-07T12:19:49.416Z] ====================================== 00:09:26.125 [2024-10-07T12:19:49.416Z] poller_cost: 470 (cyc), 188 (nsec) 00:09:26.125 00:09:26.125 real 0m1.811s 00:09:26.125 user 0m1.589s 00:09:26.125 sys 0m0.112s 00:09:26.125 12:19:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.125 ************************************ 00:09:26.125 END TEST thread_poller_perf 00:09:26.125 ************************************ 00:09:26.125 12:19:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:26.125 12:19:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:26.125 ************************************ 00:09:26.125 END TEST thread 00:09:26.125 ************************************ 00:09:26.125 00:09:26.125 real 0m4.013s 00:09:26.126 user 0m3.350s 00:09:26.126 sys 0m0.452s 00:09:26.126 12:19:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.126 12:19:49 thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.126 12:19:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:26.126 12:19:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:26.126 12:19:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.126 12:19:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.126 12:19:49 -- common/autotest_common.sh@10 -- # set +x 00:09:26.126 ************************************ 00:09:26.126 START TEST app_cmdline 00:09:26.126 ************************************ 00:09:26.126 12:19:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:26.385 * Looking for test storage... 
00:09:26.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.385 12:19:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:26.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.385 --rc genhtml_branch_coverage=1 00:09:26.385 --rc genhtml_function_coverage=1 00:09:26.385 --rc genhtml_legend=1 00:09:26.385 --rc geninfo_all_blocks=1 00:09:26.385 --rc geninfo_unexecuted_blocks=1 00:09:26.385 00:09:26.385 ' 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:26.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.385 --rc genhtml_branch_coverage=1 00:09:26.385 --rc genhtml_function_coverage=1 00:09:26.385 --rc genhtml_legend=1 00:09:26.385 --rc geninfo_all_blocks=1 00:09:26.385 --rc geninfo_unexecuted_blocks=1 00:09:26.385 
00:09:26.385 ' 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:26.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.385 --rc genhtml_branch_coverage=1 00:09:26.385 --rc genhtml_function_coverage=1 00:09:26.385 --rc genhtml_legend=1 00:09:26.385 --rc geninfo_all_blocks=1 00:09:26.385 --rc geninfo_unexecuted_blocks=1 00:09:26.385 00:09:26.385 ' 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:26.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.385 --rc genhtml_branch_coverage=1 00:09:26.385 --rc genhtml_function_coverage=1 00:09:26.385 --rc genhtml_legend=1 00:09:26.385 --rc geninfo_all_blocks=1 00:09:26.385 --rc geninfo_unexecuted_blocks=1 00:09:26.385 00:09:26.385 ' 00:09:26.385 12:19:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:26.385 12:19:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61101 00:09:26.385 12:19:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:26.385 12:19:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61101 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61101 ']' 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.385 12:19:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:26.644 [2024-10-07 12:19:49.729526] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
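This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on /var/tmp/spdk.sock and anything else is rejected as unknown, which is exactly what the env_dpdk_get_mem_stats probe further below exercises. A hedged sketch of poking the allowlist by hand (method names and the error code are taken from this run):

    $ scripts/rpc.py spdk_get_version
    $ scripts/rpc.py env_dpdk_get_mem_stats
    # first call returns the version JSON; second fails with -32601, "Method not found"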
00:09:26.644 [2024-10-07 12:19:49.729651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61101 ] 00:09:26.644 [2024-10-07 12:19:49.901413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.902 [2024-10-07 12:19:50.108336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.838 12:19:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.838 12:19:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:27.839 12:19:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:28.097 { 00:09:28.097 "version": "SPDK v25.01-pre git sha1 3950cd1bb", 00:09:28.097 "fields": { 00:09:28.097 "major": 25, 00:09:28.097 "minor": 1, 00:09:28.097 "patch": 0, 00:09:28.097 "suffix": "-pre", 00:09:28.097 "commit": "3950cd1bb" 00:09:28.097 } 00:09:28.097 } 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:28.097 12:19:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:28.097 12:19:51 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:28.356 request: 00:09:28.356 { 00:09:28.356 "method": "env_dpdk_get_mem_stats", 00:09:28.356 "req_id": 1 00:09:28.356 } 00:09:28.356 Got JSON-RPC error response 00:09:28.356 response: 00:09:28.356 { 00:09:28.356 "code": -32601, 00:09:28.356 "message": "Method not found" 00:09:28.356 } 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.356 12:19:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61101 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61101 ']' 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61101 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61101 00:09:28.356 killing process with pid 61101 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61101' 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 61101 00:09:28.356 12:19:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 61101 00:09:30.890 ************************************ 00:09:30.890 END TEST app_cmdline 00:09:30.890 ************************************ 00:09:30.890 00:09:30.890 real 0m4.599s 00:09:30.890 user 0m4.743s 00:09:30.890 sys 0m0.670s 00:09:30.890 12:19:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.890 12:19:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:30.890 12:19:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:30.890 12:19:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:30.890 12:19:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.890 12:19:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.890 ************************************ 00:09:30.890 START TEST version 00:09:30.890 ************************************ 00:09:30.890 12:19:54 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:31.180 * Looking for test storage... 
00:09:31.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:31.180 12:19:54 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:31.180 12:19:54 version -- common/autotest_common.sh@1681 -- # lcov --version 00:09:31.180 12:19:54 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:31.180 12:19:54 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:31.180 12:19:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.180 12:19:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.180 12:19:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.181 12:19:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.181 12:19:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.181 12:19:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.181 12:19:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.181 12:19:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.181 12:19:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.181 12:19:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.181 12:19:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.181 12:19:54 version -- scripts/common.sh@344 -- # case "$op" in 00:09:31.181 12:19:54 version -- scripts/common.sh@345 -- # : 1 00:09:31.181 12:19:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.181 12:19:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.181 12:19:54 version -- scripts/common.sh@365 -- # decimal 1 00:09:31.181 12:19:54 version -- scripts/common.sh@353 -- # local d=1 00:09:31.181 12:19:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.181 12:19:54 version -- scripts/common.sh@355 -- # echo 1 00:09:31.181 12:19:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.181 12:19:54 version -- scripts/common.sh@366 -- # decimal 2 00:09:31.181 12:19:54 version -- scripts/common.sh@353 -- # local d=2 00:09:31.181 12:19:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.181 12:19:54 version -- scripts/common.sh@355 -- # echo 2 00:09:31.181 12:19:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.181 12:19:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.181 12:19:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.181 12:19:54 version -- scripts/common.sh@368 -- # return 0 00:09:31.181 12:19:54 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.181 12:19:54 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:31.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.181 --rc genhtml_branch_coverage=1 00:09:31.181 --rc genhtml_function_coverage=1 00:09:31.181 --rc genhtml_legend=1 00:09:31.181 --rc geninfo_all_blocks=1 00:09:31.181 --rc geninfo_unexecuted_blocks=1 00:09:31.181 00:09:31.181 ' 00:09:31.181 12:19:54 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:31.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.181 --rc genhtml_branch_coverage=1 00:09:31.181 --rc genhtml_function_coverage=1 00:09:31.181 --rc genhtml_legend=1 00:09:31.181 --rc geninfo_all_blocks=1 00:09:31.181 --rc geninfo_unexecuted_blocks=1 00:09:31.181 00:09:31.181 ' 00:09:31.181 12:19:54 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:31.181 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:31.181 --rc genhtml_branch_coverage=1 00:09:31.181 --rc genhtml_function_coverage=1 00:09:31.181 --rc genhtml_legend=1 00:09:31.181 --rc geninfo_all_blocks=1 00:09:31.181 --rc geninfo_unexecuted_blocks=1 00:09:31.181 00:09:31.181 ' 00:09:31.181 12:19:54 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:31.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.181 --rc genhtml_branch_coverage=1 00:09:31.181 --rc genhtml_function_coverage=1 00:09:31.181 --rc genhtml_legend=1 00:09:31.181 --rc geninfo_all_blocks=1 00:09:31.181 --rc geninfo_unexecuted_blocks=1 00:09:31.181 00:09:31.181 ' 00:09:31.181 12:19:54 version -- app/version.sh@17 -- # get_header_version major 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # cut -f2 00:09:31.181 12:19:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # tr -d '"' 00:09:31.181 12:19:54 version -- app/version.sh@17 -- # major=25 00:09:31.181 12:19:54 version -- app/version.sh@18 -- # get_header_version minor 00:09:31.181 12:19:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # cut -f2 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # tr -d '"' 00:09:31.181 12:19:54 version -- app/version.sh@18 -- # minor=1 00:09:31.181 12:19:54 version -- app/version.sh@19 -- # get_header_version patch 00:09:31.181 12:19:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # cut -f2 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # tr -d '"' 00:09:31.181 12:19:54 version -- app/version.sh@19 -- # patch=0 00:09:31.181 12:19:54 version -- app/version.sh@20 -- # get_header_version suffix 00:09:31.181 12:19:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # cut -f2 00:09:31.181 12:19:54 version -- app/version.sh@14 -- # tr -d '"' 00:09:31.181 12:19:54 version -- app/version.sh@20 -- # suffix=-pre 00:09:31.181 12:19:54 version -- app/version.sh@22 -- # version=25.1 00:09:31.181 12:19:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:31.181 12:19:54 version -- app/version.sh@28 -- # version=25.1rc0 00:09:31.181 12:19:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:31.181 12:19:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:31.181 12:19:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:31.181 12:19:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:31.181 ************************************ 00:09:31.181 END TEST version 00:09:31.181 ************************************ 00:09:31.181 00:09:31.181 real 0m0.329s 00:09:31.181 user 0m0.198s 00:09:31.181 sys 0m0.187s 00:09:31.181 12:19:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.181 12:19:54 version -- common/autotest_common.sh@10 -- # set +x 00:09:31.441 12:19:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:31.441 12:19:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:31.441 12:19:54 -- spdk/autotest.sh@194 -- # uname -s 00:09:31.441 12:19:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:31.441 12:19:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:31.441 12:19:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:31.441 12:19:54 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:31.441 12:19:54 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:31.441 12:19:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.441 12:19:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.441 12:19:54 -- common/autotest_common.sh@10 -- # set +x 00:09:31.441 ************************************ 00:09:31.441 START TEST blockdev_nvme 00:09:31.441 ************************************ 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:31.441 * Looking for test storage... 00:09:31.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.441 12:19:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.441 --rc genhtml_branch_coverage=1 00:09:31.441 --rc genhtml_function_coverage=1 00:09:31.441 --rc genhtml_legend=1 00:09:31.441 --rc geninfo_all_blocks=1 00:09:31.441 --rc geninfo_unexecuted_blocks=1 00:09:31.441 00:09:31.441 ' 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.441 --rc genhtml_branch_coverage=1 00:09:31.441 --rc genhtml_function_coverage=1 00:09:31.441 --rc genhtml_legend=1 00:09:31.441 --rc geninfo_all_blocks=1 00:09:31.441 --rc geninfo_unexecuted_blocks=1 00:09:31.441 00:09:31.441 ' 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.441 --rc genhtml_branch_coverage=1 00:09:31.441 --rc genhtml_function_coverage=1 00:09:31.441 --rc genhtml_legend=1 00:09:31.441 --rc geninfo_all_blocks=1 00:09:31.441 --rc geninfo_unexecuted_blocks=1 00:09:31.441 00:09:31.441 ' 00:09:31.441 12:19:54 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.441 --rc genhtml_branch_coverage=1 00:09:31.441 --rc genhtml_function_coverage=1 00:09:31.441 --rc genhtml_legend=1 00:09:31.441 --rc geninfo_all_blocks=1 00:09:31.441 --rc geninfo_unexecuted_blocks=1 00:09:31.441 00:09:31.441 ' 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:31.441 12:19:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:31.441 12:19:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61297 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:31.442 12:19:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61297 00:09:31.442 12:19:54 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61297 ']' 00:09:31.442 12:19:54 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.442 12:19:54 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.442 12:19:54 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.442 12:19:54 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.442 12:19:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.700 [2024-10-07 12:19:54.823797] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
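A note on the startup pattern traced here: blockdev.sh launches spdk_tgt, records its pid (61297 in this run), installs a killprocess trap, and then waitforlisten blocks until the target's RPC socket at /var/tmp/spdk.sock is up, bailing out if the process dies first. A minimal sketch of the idea, not the exact helper (the real waitforlisten in autotest_common.sh does more, e.g. probing the socket via rpc.py; retry count and sleep interval are assumptions):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do             # max_retries=100 as in the trace
            kill -0 "$pid" 2>/dev/null || return 1  # target process died
            [[ -S $rpc_addr ]] && return 0          # UNIX domain socket is listening
            sleep 0.1
        done
        return 1
    }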
00:09:31.701 [2024-10-07 12:19:54.824158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:09:31.959 [2024-10-07 12:19:54.994877] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.959 [2024-10-07 12:19:55.203177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.896 12:19:56 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.896 12:19:56 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:09:32.896 12:19:56 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:32.896 12:19:56 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:09:32.896 12:19:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:32.896 12:19:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:32.896 12:19:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.155 12:19:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:33.155 12:19:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.155 12:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.414 12:19:56 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:33.414 12:19:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.414 12:19:56 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.674 12:19:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:33.674 12:19:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:33.675 12:19:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "11dbdf57-80b0-41b5-a4c4-716f3f4306a6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "11dbdf57-80b0-41b5-a4c4-716f3f4306a6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f1a8cf23-811d-4ecf-8d22-9a14943d1f7c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f1a8cf23-811d-4ecf-8d22-9a14943d1f7c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e2d6bbf3-9ece-4d75-a448-e02387c531ad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e2d6bbf3-9ece-4d75-a448-e02387c531ad",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "425778ab-7076-4e45-b13c-a58df0917aad"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "425778ab-7076-4e45-b13c-a58df0917aad",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "62ae338c-7f36-4aad-ad8e-3e2905ca216b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "62ae338c-7f36-4aad-ad8e-3e2905ca216b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "aab2a352-a401-443e-818c-48fc2a3c36e9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "aab2a352-a401-443e-818c-48fc2a3c36e9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:33.675 12:19:56 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:33.675 12:19:56 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:33.675 12:19:56 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:33.675 12:19:56 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61297 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61297 ']' 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61297 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:09:33.675 12:19:56 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61297 00:09:33.675 killing process with pid 61297 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61297' 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61297 00:09:33.675 12:19:56 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61297 00:09:36.211 12:19:59 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:36.211 12:19:59 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:36.211 12:19:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:36.211 12:19:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.211 12:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 ************************************ 00:09:36.211 START TEST bdev_hello_world 00:09:36.211 ************************************ 00:09:36.211 12:19:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:36.211 [2024-10-07 12:19:59.379473] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:36.211 [2024-10-07 12:19:59.379598] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61392 ] 00:09:36.470 [2024-10-07 12:19:59.550386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.728 [2024-10-07 12:19:59.762824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.297 [2024-10-07 12:20:00.402340] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:37.297 [2024-10-07 12:20:00.402393] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:37.297 [2024-10-07 12:20:00.402431] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:37.297 [2024-10-07 12:20:00.405523] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:37.297 [2024-10-07 12:20:00.406217] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:37.297 [2024-10-07 12:20:00.406358] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:37.297 [2024-10-07 12:20:00.406699] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
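The killprocess sequence traced just before the hello_world test boils down to: confirm the pid still exists, read its comm name from ps, refuse to signal it directly if it is the sudo wrapper, then kill and reap it. A condensed sketch of that guard (the real helper handles the sudo case rather than bailing, so treat this as an outline only):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
        local comm
        comm=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 in this run
        [[ $comm == sudo ]] && return 1             # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap; exit status ignored here
    }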
00:09:37.297 00:09:37.297 [2024-10-07 12:20:00.406816] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:38.674 00:09:38.674 ************************************ 00:09:38.674 END TEST bdev_hello_world 00:09:38.674 ************************************ 00:09:38.674 real 0m2.333s 00:09:38.674 user 0m1.959s 00:09:38.674 sys 0m0.263s 00:09:38.674 12:20:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.674 12:20:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:38.674 12:20:01 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:38.674 12:20:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.674 12:20:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.674 12:20:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.674 ************************************ 00:09:38.674 START TEST bdev_bounds 00:09:38.674 ************************************ 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61440 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.674 Process bdevio pid: 61440 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61440' 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61440 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61440 ']' 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.674 12:20:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:38.674 [2024-10-07 12:20:01.788911] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
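Each test in this log runs through the run_test wrapper, which is what produces the asterisk banners, the START TEST/END TEST markers, and the real/user/sys timing block seen above. Roughly (simplified; the real wrapper in autotest_common.sh also does the argument-count check and xtrace toggling visible in the trace):

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"                                   # emits the real/user/sys lines
        local rc=$?
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }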
00:09:38.674 [2024-10-07 12:20:01.789044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61440 ] 00:09:38.674 [2024-10-07 12:20:01.962586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.932 [2024-10-07 12:20:02.178515] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.932 [2024-10-07 12:20:02.178667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.932 [2024-10-07 12:20:02.178693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.869 12:20:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.870 12:20:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:09:39.870 12:20:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:39.870 I/O targets: 00:09:39.870 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:39.870 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:39.870 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:39.870 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:39.870 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:39.870 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:39.870 00:09:39.870 00:09:39.870 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.870 http://cunit.sourceforge.net/ 00:09:39.870 00:09:39.870 00:09:39.870 Suite: bdevio tests on: Nvme3n1 00:09:39.870 Test: blockdev write read block ...passed 00:09:39.870 Test: blockdev write zeroes read block ...passed 00:09:39.870 Test: blockdev write zeroes read no split ...passed 00:09:39.870 Test: blockdev write zeroes read split ...passed 00:09:39.870 Test: blockdev write zeroes read split partial ...passed 00:09:39.870 Test: blockdev reset ...[2024-10-07 12:20:03.021303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:39.870 passed 00:09:39.870 Test: blockdev write read 8 blocks ...[2024-10-07 12:20:03.025806] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
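Two things in these bdevio suites are easy to misread as failures: the "resetting controller" / "Resetting controller successful." pair is the blockdev reset test deliberately bouncing the controller (here 0000:00:13.0), and the "COMPARE FAILURE (02/85)" completions are the expected miscompare path of the comparev_and_writev test, which is why they are logged as NOTICE rather than ERROR. When post-processing a saved copy of this output (the file name below is hypothetical), the expected path could be asserted with something like:

    # bdevio.log: a hypothetical capture of the bdevio output above
    grep -q 'COMPARE FAILURE (02/85)' bdevio.log \
        && echo 'comparev miscompare path exercised as expected'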
00:09:39.870 passed 00:09:39.870 Test: blockdev write read size > 128k ...passed 00:09:39.870 Test: blockdev write read invalid size ...passed 00:09:39.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.870 Test: blockdev write read max offset ...passed 00:09:39.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.870 Test: blockdev writev readv 8 blocks ...passed 00:09:39.870 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.870 Test: blockdev writev readv block ...passed 00:09:39.870 Test: blockdev writev readv size > 128k ...passed 00:09:39.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.870 Test: blockdev comparev and writev ...[2024-10-07 12:20:03.035376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b520a000 len:0x1000 00:09:39.870 [2024-10-07 12:20:03.035451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:39.870 passed 00:09:39.870 Test: blockdev nvme passthru rw ...passed 00:09:39.870 Test: blockdev nvme passthru vendor specific ...passed 00:09:39.870 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:03.036671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:39.870 [2024-10-07 12:20:03.036718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:39.870 passed 00:09:39.870 Test: blockdev copy ...passed 00:09:39.870 Suite: bdevio tests on: Nvme2n3 00:09:39.870 Test: blockdev write read block ...passed 00:09:39.870 Test: blockdev write zeroes read block ...passed 00:09:39.870 Test: blockdev write zeroes read no split ...passed 00:09:39.870 Test: blockdev write zeroes read split ...passed 00:09:39.870 Test: blockdev write zeroes read split partial ...passed 00:09:39.870 Test: blockdev reset ...[2024-10-07 12:20:03.110544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:39.870 [2024-10-07 12:20:03.115775] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:39.870 passed 00:09:39.870 Test: blockdev write read 8 blocks ...passed 00:09:39.870 Test: blockdev write read size > 128k ...passed 00:09:39.870 Test: blockdev write read invalid size ...passed 00:09:39.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.870 Test: blockdev write read max offset ...passed 00:09:39.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.870 Test: blockdev writev readv 8 blocks ...passed 00:09:39.870 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.870 Test: blockdev writev readv block ...passed 00:09:39.870 Test: blockdev writev readv size > 128k ...passed 00:09:39.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.870 Test: blockdev comparev and writev ...[2024-10-07 12:20:03.125275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299204000 len:0x1000 00:09:39.870 [2024-10-07 12:20:03.125327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:39.870 passed 00:09:39.870 Test: blockdev nvme passthru rw ...passed 00:09:39.870 Test: blockdev nvme passthru vendor specific ...[2024-10-07 12:20:03.126292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:09:39.870 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:09:39.870 [2024-10-07 12:20:03.126487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:39.870 passed 00:09:39.870 Test: blockdev copy ...passed 00:09:39.870 Suite: bdevio tests on: Nvme2n2 00:09:39.870 Test: blockdev write read block ...passed 00:09:39.870 Test: blockdev write zeroes read block ...passed 00:09:39.870 Test: blockdev write zeroes read no split ...passed 00:09:40.130 Test: blockdev write zeroes read split ...passed 00:09:40.130 Test: blockdev write zeroes read split partial ...passed 00:09:40.130 Test: blockdev reset ...[2024-10-07 12:20:03.199140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:40.130 passed 00:09:40.130 Test: blockdev write read 8 blocks ...[2024-10-07 12:20:03.203992] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.130 passed 00:09:40.130 Test: blockdev write read size > 128k ...passed 00:09:40.130 Test: blockdev write read invalid size ...passed 00:09:40.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.130 Test: blockdev write read max offset ...passed 00:09:40.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.130 Test: blockdev writev readv 8 blocks ...passed 00:09:40.130 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.130 Test: blockdev writev readv block ...passed 00:09:40.130 Test: blockdev writev readv size > 128k ...passed 00:09:40.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.130 Test: blockdev comparev and writev ...[2024-10-07 12:20:03.213285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:09:40.130 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c9e3a000 len:0x1000 00:09:40.130 [2024-10-07 12:20:03.213467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.130 passed 00:09:40.130 Test: blockdev nvme passthru vendor specific ...passed 00:09:40.130 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:03.214436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.130 [2024-10-07 12:20:03.214479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.130 passed 00:09:40.130 Test: blockdev copy ...passed 00:09:40.130 Suite: bdevio tests on: Nvme2n1 00:09:40.130 Test: blockdev write read block ...passed 00:09:40.130 Test: blockdev write zeroes read block ...passed 00:09:40.130 Test: blockdev write zeroes read no split ...passed 00:09:40.130 Test: blockdev write zeroes read split ...passed 00:09:40.130 Test: blockdev write zeroes read split partial ...passed 00:09:40.130 Test: blockdev reset ...[2024-10-07 12:20:03.283891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:40.130 [2024-10-07 12:20:03.288773] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.130 passed 00:09:40.130 Test: blockdev write read 8 blocks ...passed 00:09:40.130 Test: blockdev write read size > 128k ...passed 00:09:40.130 Test: blockdev write read invalid size ...passed 00:09:40.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.130 Test: blockdev write read max offset ...passed 00:09:40.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.130 Test: blockdev writev readv 8 blocks ...passed 00:09:40.130 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.130 Test: blockdev writev readv block ...passed 00:09:40.130 Test: blockdev writev readv size > 128k ...passed 00:09:40.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.130 Test: blockdev comparev and writev ...[2024-10-07 12:20:03.299414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9e34000 len:0x1000 00:09:40.130 [2024-10-07 12:20:03.299611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.130 passed 00:09:40.130 Test: blockdev nvme passthru rw ...passed 00:09:40.130 Test: blockdev nvme passthru vendor specific ...passed 00:09:40.130 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:03.301084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.130 [2024-10-07 12:20:03.301124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.130 passed 00:09:40.130 Test: blockdev copy ...passed 00:09:40.130 Suite: bdevio tests on: Nvme1n1 00:09:40.130 Test: blockdev write read block ...passed 00:09:40.130 Test: blockdev write zeroes read block ...passed 00:09:40.130 Test: blockdev write zeroes read no split ...passed 00:09:40.130 Test: blockdev write zeroes read split ...passed 00:09:40.130 Test: blockdev write zeroes read split partial ...passed 00:09:40.130 Test: blockdev reset ...[2024-10-07 12:20:03.373996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:40.130 [2024-10-07 12:20:03.378359] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:40.130 passed 00:09:40.130 Test: blockdev write read 8 blocks ...passed 00:09:40.130 Test: blockdev write read size > 128k ...passed 00:09:40.130 Test: blockdev write read invalid size ...passed 00:09:40.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.130 Test: blockdev write read max offset ...passed 00:09:40.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.130 Test: blockdev writev readv 8 blocks ...passed 00:09:40.130 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.130 Test: blockdev writev readv block ...passed 00:09:40.130 Test: blockdev writev readv size > 128k ...passed 00:09:40.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.130 Test: blockdev comparev and writev ...[2024-10-07 12:20:03.388200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9e30000 len:0x1000 00:09:40.130 [2024-10-07 12:20:03.388402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.130 passed 00:09:40.130 Test: blockdev nvme passthru rw ...passed 00:09:40.130 Test: blockdev nvme passthru vendor specific ...[2024-10-07 12:20:03.389884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.130 [2024-10-07 12:20:03.390077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.130 passed 00:09:40.130 Test: blockdev nvme admin passthru ...passed 00:09:40.130 Test: blockdev copy ...passed 00:09:40.130 Suite: bdevio tests on: Nvme0n1 00:09:40.130 Test: blockdev write read block ...passed 00:09:40.130 Test: blockdev write zeroes read block ...passed 00:09:40.130 Test: blockdev write zeroes read no split ...passed 00:09:40.390 Test: blockdev write zeroes read split ...passed 00:09:40.390 Test: blockdev write zeroes read split partial ...passed 00:09:40.390 Test: blockdev reset ...[2024-10-07 12:20:03.463696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:40.390 [2024-10-07 12:20:03.468191] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:40.390 passed 00:09:40.390 Test: blockdev write read 8 blocks ...passed 00:09:40.390 Test: blockdev write read size > 128k ...passed 00:09:40.390 Test: blockdev write read invalid size ...passed 00:09:40.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.390 Test: blockdev write read max offset ...passed 00:09:40.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.390 Test: blockdev writev readv 8 blocks ...passed 00:09:40.390 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.390 Test: blockdev writev readv block ...passed 00:09:40.390 Test: blockdev writev readv size > 128k ...passed 00:09:40.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.390 Test: blockdev comparev and writev ...[2024-10-07 12:20:03.476241] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:40.390 separate metadata which is not supported yet. 
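The skip on Nvme0n1 follows from the bdev_get_bdevs dump earlier in this log: Nvme0n1 is the only namespace reporting "md_size": 64 with "md_interleave": false, i.e. separate (non-interleaved) metadata, which comparev_and_writev does not support yet. Bdevs affected this way could be listed with a filter along these lines (a sketch; jq treats a missing md_size as null, which fails both conditions):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.md_size > 0 and .md_interleave == false) | .name'
    # -> Nvme0n1 for the configuration in this run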
00:09:40.390 passed 00:09:40.390 Test: blockdev nvme passthru rw ...passed 00:09:40.390 Test: blockdev nvme passthru vendor specific ...passed 00:09:40.390 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:03.477144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:40.390 [2024-10-07 12:20:03.477225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:40.390 passed 00:09:40.390 Test: blockdev copy ...passed 00:09:40.390 00:09:40.390 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.390 suites 6 6 n/a 0 0 00:09:40.390 tests 138 138 138 0 0 00:09:40.390 asserts 893 893 893 0 n/a 00:09:40.390 00:09:40.390 Elapsed time = 1.418 seconds 00:09:40.390 0 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61440 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61440 ']' 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61440 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61440 00:09:40.390 killing process with pid 61440 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61440' 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61440 00:09:40.390 12:20:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61440 00:09:41.774 12:20:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:41.774 00:09:41.774 real 0m2.960s 00:09:41.774 user 0m7.244s 00:09:41.774 sys 0m0.414s 00:09:41.774 12:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.774 12:20:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:41.774 ************************************ 00:09:41.774 END TEST bdev_bounds 00:09:41.774 ************************************ 00:09:41.774 12:20:04 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:41.774 12:20:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:41.774 12:20:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.774 12:20:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.774 ************************************ 00:09:41.774 START TEST bdev_nbd 00:09:41.774 ************************************ 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:41.774 12:20:04 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61504 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61504 /var/tmp/spdk-nbd.sock 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61504 ']' 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.774 12:20:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:41.774 [2024-10-07 12:20:04.841431] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
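For the nbd test, bdev_svc is started with its own RPC socket (/var/tmp/spdk-nbd.sock) so that nbd devices can be attached and torn down without going through the default /var/tmp/spdk.sock target. In outline, using the paths from the trace (each nbd_start_disk call prints the /dev/nbdX it bound the bdev to, and the waitfornbd/dd sequences traced below then confirm each device answers a single 4096-byte O_DIRECT read):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1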
00:09:41.774 [2024-10-07 12:20:04.841555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.774 [2024-10-07 12:20:05.007281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.033 [2024-10-07 12:20:05.217819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.971 12:20:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.971 12:20:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:09:42.971 12:20:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:42.971 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.971 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.971 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:42.972 12:20:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.972 1+0 records in 
00:09:42.972 1+0 records out 00:09:42.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549567 s, 7.5 MB/s 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:42.972 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.231 1+0 records in 00:09:43.231 1+0 records out 00:09:43.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723627 s, 5.7 MB/s 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:43.231 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:43.490 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:43.490 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:43.490 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.491 1+0 records in 00:09:43.491 1+0 records out 00:09:43.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730734 s, 5.6 MB/s 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:43.491 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.750 1+0 records in 00:09:43.750 1+0 records out 00:09:43.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729134 s, 5.6 MB/s 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.750 12:20:06 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:43.750 12:20:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.010 1+0 records in 00:09:44.010 1+0 records out 00:09:44.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061612 s, 6.6 MB/s 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:44.010 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.269 1+0 records in 00:09:44.269 1+0 records out 00:09:44.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885588 s, 4.6 MB/s 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:44.269 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd0", 00:09:44.528 "bdev_name": "Nvme0n1" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd1", 00:09:44.528 "bdev_name": "Nvme1n1" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd2", 00:09:44.528 "bdev_name": "Nvme2n1" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd3", 00:09:44.528 "bdev_name": "Nvme2n2" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd4", 00:09:44.528 "bdev_name": "Nvme2n3" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd5", 00:09:44.528 "bdev_name": "Nvme3n1" 00:09:44.528 } 00:09:44.528 ]' 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd0", 00:09:44.528 "bdev_name": "Nvme0n1" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd1", 00:09:44.528 "bdev_name": "Nvme1n1" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd2", 00:09:44.528 "bdev_name": "Nvme2n1" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd3", 00:09:44.528 "bdev_name": "Nvme2n2" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd4", 00:09:44.528 "bdev_name": "Nvme2n3" 00:09:44.528 }, 00:09:44.528 { 00:09:44.528 "nbd_device": "/dev/nbd5", 00:09:44.528 "bdev_name": "Nvme3n1" 00:09:44.528 } 00:09:44.528 ]' 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.528 12:20:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.788 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.047 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.305 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.564 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.823 12:20:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.081 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.341 12:20:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:46.341 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:46.600 /dev/nbd0 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:46.600 
12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.600 1+0 records in 00:09:46.600 1+0 records out 00:09:46.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545042 s, 7.5 MB/s 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:46.600 12:20:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:46.872 /dev/nbd1 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:46.872 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.872 1+0 records in 00:09:46.872 1+0 records out 00:09:46.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072488 s, 5.7 MB/s 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:46.873 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:47.145 /dev/nbd10 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.145 1+0 records in 00:09:47.145 1+0 records out 00:09:47.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720188 s, 5.7 MB/s 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:47.145 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:47.404 /dev/nbd11 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.404 1+0 records in 00:09:47.404 1+0 records out 00:09:47.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732126 s, 5.6 MB/s 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:47.404 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:47.662 /dev/nbd12 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.662 1+0 records in 00:09:47.662 1+0 records out 00:09:47.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812685 s, 5.0 MB/s 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:47.662 12:20:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:47.921 /dev/nbd13 00:09:47.921 12:20:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:47.921 1+0 records in 00:09:47.921 1+0 records out 00:09:47.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571836 s, 7.2 MB/s 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.921 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd0", 00:09:48.180 "bdev_name": "Nvme0n1" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd1", 00:09:48.180 "bdev_name": "Nvme1n1" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd10", 00:09:48.180 "bdev_name": "Nvme2n1" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd11", 00:09:48.180 "bdev_name": "Nvme2n2" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd12", 00:09:48.180 "bdev_name": "Nvme2n3" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd13", 00:09:48.180 "bdev_name": "Nvme3n1" 00:09:48.180 } 00:09:48.180 ]' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd0", 00:09:48.180 "bdev_name": "Nvme0n1" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd1", 00:09:48.180 "bdev_name": "Nvme1n1" 00:09:48.180 }, 00:09:48.180 { 
00:09:48.180 "nbd_device": "/dev/nbd10", 00:09:48.180 "bdev_name": "Nvme2n1" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd11", 00:09:48.180 "bdev_name": "Nvme2n2" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd12", 00:09:48.180 "bdev_name": "Nvme2n3" 00:09:48.180 }, 00:09:48.180 { 00:09:48.180 "nbd_device": "/dev/nbd13", 00:09:48.180 "bdev_name": "Nvme3n1" 00:09:48.180 } 00:09:48.180 ]' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:48.180 /dev/nbd1 00:09:48.180 /dev/nbd10 00:09:48.180 /dev/nbd11 00:09:48.180 /dev/nbd12 00:09:48.180 /dev/nbd13' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:48.180 /dev/nbd1 00:09:48.180 /dev/nbd10 00:09:48.180 /dev/nbd11 00:09:48.180 /dev/nbd12 00:09:48.180 /dev/nbd13' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:48.180 256+0 records in 00:09:48.180 256+0 records out 00:09:48.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121402 s, 86.4 MB/s 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.180 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:48.439 256+0 records in 00:09:48.439 256+0 records out 00:09:48.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124635 s, 8.4 MB/s 00:09:48.439 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.439 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:48.439 256+0 records in 00:09:48.439 256+0 records out 00:09:48.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125482 s, 8.4 MB/s 00:09:48.439 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.439 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:48.698 256+0 records in 00:09:48.698 256+0 records out 00:09:48.698 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.125595 s, 8.3 MB/s 00:09:48.698 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.698 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:48.956 256+0 records in 00:09:48.956 256+0 records out 00:09:48.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126006 s, 8.3 MB/s 00:09:48.957 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.957 12:20:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:48.957 256+0 records in 00:09:48.957 256+0 records out 00:09:48.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125646 s, 8.3 MB/s 00:09:48.957 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.957 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:49.215 256+0 records in 00:09:49.215 256+0 records out 00:09:49.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125394 s, 8.4 MB/s 00:09:49.215 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:49.215 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:49.215 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.215 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.216 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.475 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.734 12:20:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.992 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:50.251 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.511 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.770 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.770 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.770 12:20:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:50.770 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:51.028 malloc_lvol_verify 00:09:51.028 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:51.287 1da23029-9fc1-45c0-87f6-011891157060 00:09:51.287 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:51.546 560dcbc4-182f-4a62-aef4-6c4e94128f4d 00:09:51.546 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:51.805 /dev/nbd0 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:51.805 mke2fs 1.47.0 (5-Feb-2023) 00:09:51.805 Discarding device blocks: 0/4096 done 00:09:51.805 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:51.805 00:09:51.805 Allocating group tables: 0/1 done 00:09:51.805 Writing inode tables: 0/1 done 00:09:51.805 Creating journal (1024 blocks): done 00:09:51.805 Writing superblocks and filesystem accounting information: 0/1 done 00:09:51.805 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:51.805 12:20:14 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.805 12:20:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61504 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61504 ']' 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61504 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61504 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.064 killing process with pid 61504 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61504' 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61504 00:09:52.064 12:20:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61504 00:09:53.443 ************************************ 00:09:53.443 END TEST bdev_nbd 00:09:53.443 ************************************ 00:09:53.443 12:20:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:53.443 00:09:53.443 real 0m11.798s 00:09:53.443 user 0m15.426s 00:09:53.443 sys 0m4.726s 00:09:53.443 12:20:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.443 12:20:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:53.443 12:20:16 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:53.443 12:20:16 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:09:53.443 skipping fio tests on NVMe due to multi-ns failures. 00:09:53.443 12:20:16 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
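The @868-@889 and @35-@45 trace lines that repeat for every nbd device above come from two small polling helpers, waitfornbd in common/autotest_common.sh and waitfornbd_exit in bdev/nbd_common.sh. A minimal sketch of their shape, reconstructed from this xtrace (the loop bounds, the grep on /proc/partitions, and the one-block direct-I/O read are all visible above; the sleep interval and the $testdir variable are assumptions):

    # Start side: wait until /dev/$nbd_name shows up and is readable.
    waitfornbd() {
        local nbd_name=$1
        local i
        # Poll /proc/partitions until the kernel registers the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed interval; the xtrace does not show it
        done
        # Read one 4096-byte block with O_DIRECT to confirm the device works.
        # $testdir stands in for the test directory seen in the log.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        local size
        size=$(stat -c %s "$testdir/nbdtest")
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ]    # a zero-byte copy means the device never came up
    }

    # Stop side: wait until the kernel drops the device from /proc/partitions.
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

Each nbd_start_disk RPC above is followed by one waitfornbd round, and each nbd_stop_disk by one waitfornbd_exit round, which is why the same 4096-byte dd and grep lines recur per device.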
00:09:53.443 12:20:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:53.443 12:20:16 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:53.443 12:20:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:09:53.443 12:20:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.443 12:20:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.443 ************************************ 00:09:53.443 START TEST bdev_verify 00:09:53.443 ************************************ 00:09:53.443 12:20:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:53.443 [2024-10-07 12:20:16.697804] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:53.443 [2024-10-07 12:20:16.698005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61895 ] 00:09:53.702 [2024-10-07 12:20:16.870656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.961 [2024-10-07 12:20:17.084818] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.961 [2024-10-07 12:20:17.084846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.528 Running I/O for 5 seconds... 00:09:56.842 18304.00 IOPS, 71.50 MiB/s [2024-10-07T12:20:21.070Z] 18464.00 IOPS, 72.12 MiB/s [2024-10-07T12:20:22.445Z] 18176.00 IOPS, 71.00 MiB/s [2024-10-07T12:20:23.013Z] 18416.00 IOPS, 71.94 MiB/s [2024-10-07T12:20:23.013Z] 18252.80 IOPS, 71.30 MiB/s 00:09:59.722 Latency(us) 00:09:59.722 [2024-10-07T12:20:23.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.722 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x0 length 0xbd0bd 00:09:59.722 Nvme0n1 : 5.08 1336.47 5.22 0.00 0.00 95595.08 11212.18 83380.74 00:09:59.722 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:59.722 Nvme0n1 : 5.06 1669.37 6.52 0.00 0.00 76523.68 12844.00 73695.10 00:09:59.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x0 length 0xa0000 00:09:59.722 Nvme1n1 : 5.08 1336.19 5.22 0.00 0.00 95451.81 10948.99 77485.13 00:09:59.722 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0xa0000 length 0xa0000 00:09:59.722 Nvme1n1 : 5.06 1668.88 6.52 0.00 0.00 76411.30 14528.46 69062.84 00:09:59.722 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x0 length 0x80000 00:09:59.722 Nvme2n1 : 5.08 1335.90 5.22 0.00 0.00 95077.27 10527.87 78327.36 00:09:59.722 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x80000 length 0x80000 00:09:59.722 Nvme2n1 : 5.06 1668.44 6.52 0.00 0.00 76333.04 14844.30 69905.07 00:09:59.722 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x0 length 0x80000 00:09:59.722 Nvme2n2 : 5.08 1335.61 5.22 0.00 0.00 94917.04 10633.15 79590.71 00:09:59.722 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x80000 length 0x80000 00:09:59.722 Nvme2n2 : 5.06 1667.98 6.52 0.00 0.00 76140.43 13475.68 68641.72 00:09:59.722 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x0 length 0x80000 00:09:59.722 Nvme2n3 : 5.08 1335.34 5.22 0.00 0.00 94769.02 10685.79 80854.05 00:09:59.722 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x80000 length 0x80000 00:09:59.722 Nvme2n3 : 5.07 1667.58 6.51 0.00 0.00 76040.65 13580.95 67378.38 00:09:59.722 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x0 length 0x20000 00:09:59.722 Nvme3n1 : 5.08 1335.07 5.22 0.00 0.00 94686.53 10580.51 81696.28 00:09:59.722 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:59.722 Verification LBA range: start 0x20000 length 0x20000 00:09:59.722 Nvme3n1 : 5.07 1667.13 6.51 0.00 0.00 75935.73 12317.61 69905.07 00:09:59.722 [2024-10-07T12:20:23.013Z] =================================================================================================================== 00:09:59.722 [2024-10-07T12:20:23.013Z] Total : 18023.96 70.41 0.00 0.00 84627.07 10527.87 83380.74 00:10:01.632 00:10:01.632 real 0m7.802s 00:10:01.632 user 0m14.189s 00:10:01.632 sys 0m0.318s 00:10:01.633 12:20:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.633 12:20:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:01.633 ************************************ 00:10:01.633 END TEST bdev_verify 00:10:01.633 ************************************ 00:10:01.633 12:20:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:01.633 12:20:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:01.633 12:20:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.633 12:20:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:01.633 ************************************ 00:10:01.633 START TEST bdev_verify_big_io 00:10:01.633 ************************************ 00:10:01.633 12:20:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:01.633 [2024-10-07 12:20:24.577392] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
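The ************ START TEST / END TEST banners and the real/user/sys triplets that bracket bdev_verify above (and bdev_verify_big_io below) come from the run_test wrapper in common/autotest_common.sh. A simplified sketch consistent with this log's output; the real helper also handles xtrace toggling and timing bookkeeping, elided here:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"      # run the test; bash's time keyword prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

In this log it is invoked as, e.g., run_test bdev_verify .../build/examples/bdevperf --json .../test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '', so each bdevperf latency table sits between one START/END banner pair.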
00:10:01.633 [2024-10-07 12:20:24.577501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61999 ] 00:10:01.633 [2024-10-07 12:20:24.749183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.891 [2024-10-07 12:20:24.966217] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.891 [2024-10-07 12:20:24.966251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.827 Running I/O for 5 seconds... 00:10:07.490 2587.00 IOPS, 161.69 MiB/s [2024-10-07T12:20:31.347Z] 3520.50 IOPS, 220.03 MiB/s [2024-10-07T12:20:31.914Z] 3073.33 IOPS, 192.08 MiB/s [2024-10-07T12:20:32.175Z] 3051.75 IOPS, 190.73 MiB/s 00:10:08.884 Latency(us) 00:10:08.884 [2024-10-07T12:20:32.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x0 length 0xbd0b 00:10:08.884 Nvme0n1 : 5.57 103.46 6.47 0.00 0.00 1189542.28 13686.23 1327354.04 00:10:08.884 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:08.884 Nvme0n1 : 5.40 222.26 13.89 0.00 0.00 562000.55 21687.42 562609.45 00:10:08.884 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x0 length 0xa000 00:10:08.884 Nvme1n1 : 5.64 110.06 6.88 0.00 0.00 1061767.37 39374.24 1030889.18 00:10:08.884 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0xa000 length 0xa000 00:10:08.884 Nvme1n1 : 5.46 229.96 14.37 0.00 0.00 537589.24 20634.63 535658.10 00:10:08.884 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x0 length 0x8000 00:10:08.884 Nvme2n1 : 5.68 118.06 7.38 0.00 0.00 961973.26 29899.16 1253237.82 00:10:08.884 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x8000 length 0x8000 00:10:08.884 Nvme2n1 : 5.46 230.99 14.44 0.00 0.00 527080.64 21266.30 555871.61 00:10:08.884 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x0 length 0x8000 00:10:08.884 Nvme2n2 : 5.80 143.33 8.96 0.00 0.00 760072.40 20002.96 970248.64 00:10:08.884 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x8000 length 0x8000 00:10:08.884 Nvme2n2 : 5.46 229.66 14.35 0.00 0.00 520473.56 20424.07 562609.45 00:10:08.884 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x0 length 0x8000 00:10:08.884 Nvme2n3 : 6.01 188.52 11.78 0.00 0.00 556651.15 7527.43 2236962.13 00:10:08.884 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x8000 length 0x8000 00:10:08.884 Nvme2n3 : 5.47 234.13 14.63 0.00 0.00 504504.24 36005.32 545764.86 00:10:08.884 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x0 length 0x2000 00:10:08.884 Nvme3n1 : 6.16 274.77 17.17 0.00 0.00 373561.17 1204.13 1718148.63 
00:10:08.884 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:08.884 Verification LBA range: start 0x2000 length 0x2000 00:10:08.884 Nvme3n1 : 5.52 251.84 15.74 0.00 0.00 462792.35 1131.75 579454.05 00:10:08.884 [2024-10-07T12:20:32.175Z] =================================================================================================================== 00:10:08.884 [2024-10-07T12:20:32.175Z] Total : 2337.04 146.07 0.00 0.00 595503.54 1131.75 2236962.13 00:10:10.802 00:10:10.802 real 0m9.487s 00:10:10.802 user 0m17.519s 00:10:10.802 sys 0m0.358s 00:10:10.802 12:20:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.802 12:20:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 ************************************ 00:10:10.802 END TEST bdev_verify_big_io 00:10:10.802 ************************************ 00:10:10.802 12:20:34 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:10.802 12:20:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:10.802 12:20:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.802 12:20:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.802 ************************************ 00:10:10.802 START TEST bdev_write_zeroes 00:10:10.802 ************************************ 00:10:10.802 12:20:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:11.063 [2024-10-07 12:20:34.136106] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:11.063 [2024-10-07 12:20:34.136376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62119 ] 00:10:11.063 [2024-10-07 12:20:34.305179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.323 [2024-10-07 12:20:34.499421] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.889 Running I/O for 1 seconds... 
00:10:13.264 79488.00 IOPS, 310.50 MiB/s 00:10:13.264 Latency(us) 00:10:13.264 [2024-10-07T12:20:36.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.264 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:13.264 Nvme0n1 : 1.02 13193.10 51.54 0.00 0.00 9684.61 8106.46 21897.97 00:10:13.264 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:13.264 Nvme1n1 : 1.02 13180.92 51.49 0.00 0.00 9682.21 8422.30 22319.09 00:10:13.264 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:13.264 Nvme2n1 : 1.02 13168.90 51.44 0.00 0.00 9667.79 8106.46 21687.42 00:10:13.264 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:13.264 Nvme2n2 : 1.02 13157.30 51.40 0.00 0.00 9635.80 8053.82 21897.97 00:10:13.264 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:13.264 Nvme2n3 : 1.02 13145.69 51.35 0.00 0.00 9628.29 8159.10 21687.42 00:10:13.264 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:13.264 Nvme3n1 : 1.02 13133.79 51.30 0.00 0.00 9588.79 6895.76 21055.74 00:10:13.264 [2024-10-07T12:20:36.555Z] =================================================================================================================== 00:10:13.264 [2024-10-07T12:20:36.555Z] Total : 78979.71 308.51 0.00 0.00 9647.92 6895.76 22319.09 00:10:14.200 00:10:14.200 real 0m3.368s 00:10:14.200 user 0m2.985s 00:10:14.200 sys 0m0.268s 00:10:14.200 ************************************ 00:10:14.200 END TEST bdev_write_zeroes 00:10:14.200 ************************************ 00:10:14.200 12:20:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.200 12:20:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:14.200 12:20:37 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:14.200 12:20:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:14.200 12:20:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.200 12:20:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.200 ************************************ 00:10:14.200 START TEST bdev_json_nonenclosed 00:10:14.200 ************************************ 00:10:14.200 12:20:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:14.459 [2024-10-07 12:20:37.590720] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:14.459 [2024-10-07 12:20:37.590863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62176 ] 00:10:14.717 [2024-10-07 12:20:37.769107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.717 [2024-10-07 12:20:37.972699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.717 [2024-10-07 12:20:37.972790] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:14.717 [2024-10-07 12:20:37.972810] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:14.717 [2024-10-07 12:20:37.972822] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.284 00:10:15.284 real 0m0.877s 00:10:15.284 user 0m0.601s 00:10:15.284 sys 0m0.170s 00:10:15.284 12:20:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.284 ************************************ 00:10:15.284 12:20:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:15.284 END TEST bdev_json_nonenclosed 00:10:15.284 ************************************ 00:10:15.284 12:20:38 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:15.284 12:20:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:15.284 12:20:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.284 12:20:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.284 ************************************ 00:10:15.284 START TEST bdev_json_nonarray 00:10:15.284 ************************************ 00:10:15.284 12:20:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:15.284 [2024-10-07 12:20:38.555682] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:15.284 [2024-10-07 12:20:38.555821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62203 ] 00:10:15.543 [2024-10-07 12:20:38.731563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.801 [2024-10-07 12:20:38.937822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.801 [2024-10-07 12:20:38.937925] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
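Note on the two negative tests here: bdev_json_nonenclosed and bdev_json_nonarray hand bdevperf deliberately malformed configs and pass when json_config_prepare_ctx rejects them, which is exactly what the "Invalid JSON configuration: not enclosed in {}" and "'subsystems' should be an array" errors above show. The actual nonenclosed.json and nonarray.json contents are not printed in this log; the payloads below are hypothetical shapes consistent with those two errors, shown next to a well-formed skeleton:
  "subsystems": []                                              (nonenclosed-style: top level is not a {} object)
  { "subsystems": { "subsystem": "bdev", "config": [] } }       (nonarray-style: "subsystems" is an object, not an array)
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }   (valid shape accepted by the loader)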
00:10:15.801 [2024-10-07 12:20:38.937963] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:15.801 [2024-10-07 12:20:38.937976] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.060 00:10:16.060 real 0m0.900s 00:10:16.060 user 0m0.633s 00:10:16.060 sys 0m0.158s 00:10:16.060 12:20:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.060 ************************************ 00:10:16.060 END TEST bdev_json_nonarray 00:10:16.060 ************************************ 00:10:16.060 12:20:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:16.319 12:20:39 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:16.319 00:10:16.319 real 0m44.944s 00:10:16.319 user 1m5.466s 00:10:16.319 sys 0m7.900s 00:10:16.319 12:20:39 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.319 ************************************ 00:10:16.319 END TEST blockdev_nvme 00:10:16.319 ************************************ 00:10:16.320 12:20:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.320 12:20:39 -- spdk/autotest.sh@209 -- # uname -s 00:10:16.320 12:20:39 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:16.320 12:20:39 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:16.320 12:20:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:16.320 12:20:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.320 12:20:39 -- common/autotest_common.sh@10 -- # set +x 00:10:16.320 ************************************ 00:10:16.320 START TEST blockdev_nvme_gpt 00:10:16.320 ************************************ 00:10:16.320 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:16.579 * Looking for test storage... 
00:10:16.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.579 12:20:39 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.579 --rc genhtml_branch_coverage=1 00:10:16.579 --rc genhtml_function_coverage=1 00:10:16.579 --rc genhtml_legend=1 00:10:16.579 --rc geninfo_all_blocks=1 00:10:16.579 --rc geninfo_unexecuted_blocks=1 00:10:16.579 00:10:16.579 ' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.579 --rc 
genhtml_branch_coverage=1 00:10:16.579 --rc genhtml_function_coverage=1 00:10:16.579 --rc genhtml_legend=1 00:10:16.579 --rc geninfo_all_blocks=1 00:10:16.579 --rc geninfo_unexecuted_blocks=1 00:10:16.579 00:10:16.579 ' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.579 --rc genhtml_branch_coverage=1 00:10:16.579 --rc genhtml_function_coverage=1 00:10:16.579 --rc genhtml_legend=1 00:10:16.579 --rc geninfo_all_blocks=1 00:10:16.579 --rc geninfo_unexecuted_blocks=1 00:10:16.579 00:10:16.579 ' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:16.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.579 --rc genhtml_branch_coverage=1 00:10:16.579 --rc genhtml_function_coverage=1 00:10:16.579 --rc genhtml_legend=1 00:10:16.579 --rc geninfo_all_blocks=1 00:10:16.579 --rc geninfo_unexecuted_blocks=1 00:10:16.579 00:10:16.579 ' 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:16.579 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62292 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:10:16.580 12:20:39 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62292 00:10:16.580 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62292 ']' 00:10:16.580 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.580 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.580 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.580 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.580 12:20:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:16.580 [2024-10-07 12:20:39.857551] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:16.580 [2024-10-07 12:20:39.857693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62292 ] 00:10:16.839 [2024-10-07 12:20:40.036264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.097 [2024-10-07 12:20:40.238890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.101 12:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.101 12:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:10:18.101 12:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:18.101 12:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:10:18.101 12:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:18.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.927 Waiting for block devices as requested 00:10:18.927 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.927 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.927 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.186 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.460 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:24.460 12:20:47 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:24.460 BYT; 00:10:24.460 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:24.460 BYT; 00:10:24.460 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:24.460 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:24.461 12:20:47 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:24.461 12:20:47 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:25.397 The operation has completed successfully. 00:10:25.397 12:20:48 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:26.334 The operation has completed successfully. 00:10:26.334 12:20:49 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:27.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.859 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.859 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.859 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:28.118 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:28.118 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:28.118 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.118 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.118 [] 00:10:28.118 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.118 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:28.118 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:28.118 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:28.118 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:28.118 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:28.118 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.118 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:28.687 12:20:51 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:28.687 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.687 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:28.688 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8edd5979-5dae-4cf1-9c62-09f76b32105a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8edd5979-5dae-4cf1-9c62-09f76b32105a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4fd790fd-46dc-45b6-98bf-5cc1e2d6e4c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4fd790fd-46dc-45b6-98bf-5cc1e2d6e4c0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5d628efe-0664-49bb-b7a5-dac2dcfbdbe7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5d628efe-0664-49bb-b7a5-dac2dcfbdbe7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "af5ab79c-38ba-4a4f-9b20-0c323bc43a8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "af5ab79c-38ba-4a4f-9b20-0c323bc43a8a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ff6d16d2-927c-4dd8-be76-2e996bbc5f9d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ff6d16d2-927c-4dd8-be76-2e996bbc5f9d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:28.688 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:28.688 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:28.688 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:28.688 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:28.688 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62292 00:10:28.688 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62292 ']' 00:10:28.688 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62292 00:10:28.688 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:10:28.688 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.688 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62292 00:10:28.947 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.947 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.947 killing process with pid 62292 00:10:28.947 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62292' 00:10:28.947 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62292 00:10:28.947 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62292 00:10:31.482 12:20:54 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:31.482 12:20:54 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:31.482 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:31.482 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.482 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:31.482 ************************************ 00:10:31.482 START TEST bdev_hello_world 00:10:31.482 ************************************ 00:10:31.482 12:20:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:31.482 [2024-10-07 
12:20:54.517837] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:31.482 [2024-10-07 12:20:54.517981] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62934 ] 00:10:31.482 [2024-10-07 12:20:54.698029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.741 [2024-10-07 12:20:54.955575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.679 [2024-10-07 12:20:55.603997] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:32.679 [2024-10-07 12:20:55.604046] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:32.679 [2024-10-07 12:20:55.604066] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:32.679 [2024-10-07 12:20:55.606832] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:32.679 [2024-10-07 12:20:55.607553] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:32.679 [2024-10-07 12:20:55.607590] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:32.679 [2024-10-07 12:20:55.607846] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:32.679 00:10:32.679 [2024-10-07 12:20:55.607883] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:33.617 00:10:33.617 real 0m2.430s 00:10:33.617 user 0m2.036s 00:10:33.617 sys 0m0.285s 00:10:33.617 12:20:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.617 12:20:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:33.617 ************************************ 00:10:33.617 END TEST bdev_hello_world 00:10:33.617 ************************************ 00:10:33.876 12:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:33.876 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:33.876 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.876 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.876 ************************************ 00:10:33.876 START TEST bdev_bounds 00:10:33.876 ************************************ 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62987 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:33.876 Process bdevio pid: 62987 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62987' 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62987 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 62987 ']' 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.876 12:20:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:33.876 [2024-10-07 12:20:57.028871] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:33.876 [2024-10-07 12:20:57.029024] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62987 ] 00:10:34.135 [2024-10-07 12:20:57.200626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:34.135 [2024-10-07 12:20:57.414749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.135 [2024-10-07 12:20:57.414942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.135 [2024-10-07 12:20:57.415007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.088 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.088 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:10:35.088 12:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:35.088 I/O targets: 00:10:35.088 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:35.088 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:35.088 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:35.088 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:35.088 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:35.088 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:35.088 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:35.088 00:10:35.088 00:10:35.088 CUnit - A unit testing framework for C - Version 2.1-3 00:10:35.088 http://cunit.sourceforge.net/ 00:10:35.088 00:10:35.088 00:10:35.088 Suite: bdevio tests on: Nvme3n1 00:10:35.088 Test: blockdev write read block ...passed 00:10:35.088 Test: blockdev write zeroes read block ...passed 00:10:35.088 Test: blockdev write zeroes read no split ...passed 00:10:35.088 Test: blockdev write zeroes read split ...passed 00:10:35.088 Test: blockdev write zeroes read split partial ...passed 00:10:35.088 Test: blockdev reset ...[2024-10-07 12:20:58.291699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:35.088 [2024-10-07 12:20:58.296246] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:35.088 passed 00:10:35.088 Test: blockdev write read 8 blocks ...passed 00:10:35.088 Test: blockdev write read size > 128k ...passed 00:10:35.088 Test: blockdev write read invalid size ...passed 00:10:35.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.088 Test: blockdev write read max offset ...passed 00:10:35.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.088 Test: blockdev writev readv 8 blocks ...passed 00:10:35.088 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.088 Test: blockdev writev readv block ...passed 00:10:35.088 Test: blockdev writev readv size > 128k ...passed 00:10:35.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.088 Test: blockdev comparev and writev ...[2024-10-07 12:20:58.306277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4a06000 len:0x1000 00:10:35.088 [2024-10-07 12:20:58.306351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:35.088 passed 00:10:35.088 Test: blockdev nvme passthru rw ...passed 00:10:35.088 Test: blockdev nvme passthru vendor specific ...[2024-10-07 12:20:58.307346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:35.088 [2024-10-07 12:20:58.307393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:35.088 passed 00:10:35.088 Test: blockdev nvme admin passthru ...passed 00:10:35.088 Test: blockdev copy ...passed 00:10:35.088 Suite: bdevio tests on: Nvme2n3 00:10:35.088 Test: blockdev write read block ...passed 00:10:35.088 Test: blockdev write zeroes read block ...passed 00:10:35.088 Test: blockdev write zeroes read no split ...passed 00:10:35.088 Test: blockdev write zeroes read split ...passed 00:10:35.347 Test: blockdev write zeroes read split partial ...passed 00:10:35.347 Test: blockdev reset ...[2024-10-07 12:20:58.397175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:35.347 [2024-10-07 12:20:58.402302] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:35.347 passed 00:10:35.347 Test: blockdev write read 8 blocks ...passed 00:10:35.347 Test: blockdev write read size > 128k ...passed 00:10:35.347 Test: blockdev write read invalid size ...passed 00:10:35.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.347 Test: blockdev write read max offset ...passed 00:10:35.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.347 Test: blockdev writev readv 8 blocks ...passed 00:10:35.347 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.347 Test: blockdev writev readv block ...passed 00:10:35.347 Test: blockdev writev readv size > 128k ...passed 00:10:35.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.347 Test: blockdev comparev and writev ...[2024-10-07 12:20:58.411539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4c3c000 len:0x1000 00:10:35.347 [2024-10-07 12:20:58.411614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:35.347 passed 00:10:35.347 Test: blockdev nvme passthru rw ...passed 00:10:35.347 Test: blockdev nvme passthru vendor specific ...[2024-10-07 12:20:58.412459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:35.347 [2024-10-07 12:20:58.412496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:35.347 passed 00:10:35.347 Test: blockdev nvme admin passthru ...passed 00:10:35.347 Test: blockdev copy ...passed 00:10:35.347 Suite: bdevio tests on: Nvme2n2 00:10:35.347 Test: blockdev write read block ...passed 00:10:35.347 Test: blockdev write zeroes read block ...passed 00:10:35.347 Test: blockdev write zeroes read no split ...passed 00:10:35.347 Test: blockdev write zeroes read split ...passed 00:10:35.347 Test: blockdev write zeroes read split partial ...passed 00:10:35.347 Test: blockdev reset ...[2024-10-07 12:20:58.493416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:35.347 [2024-10-07 12:20:58.498422] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:35.347 passed 00:10:35.347 Test: blockdev write read 8 blocks ...passed 00:10:35.347 Test: blockdev write read size > 128k ...passed 00:10:35.347 Test: blockdev write read invalid size ...passed 00:10:35.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.347 Test: blockdev write read max offset ...passed 00:10:35.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.347 Test: blockdev writev readv 8 blocks ...passed 00:10:35.347 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.347 Test: blockdev writev readv block ...passed 00:10:35.347 Test: blockdev writev readv size > 128k ...passed 00:10:35.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.347 Test: blockdev comparev and writev ...[2024-10-07 12:20:58.508341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4c36000 len:0x1000 00:10:35.347 [2024-10-07 12:20:58.508407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:35.347 passed 00:10:35.347 Test: blockdev nvme passthru rw ...passed 00:10:35.347 Test: blockdev nvme passthru vendor specific ...passed 00:10:35.347 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:58.509378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:35.347 [2024-10-07 12:20:58.509412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:35.347 passed 00:10:35.347 Test: blockdev copy ...passed 00:10:35.347 Suite: bdevio tests on: Nvme2n1 00:10:35.347 Test: blockdev write read block ...passed 00:10:35.347 Test: blockdev write zeroes read block ...passed 00:10:35.347 Test: blockdev write zeroes read no split ...passed 00:10:35.347 Test: blockdev write zeroes read split ...passed 00:10:35.347 Test: blockdev write zeroes read split partial ...passed 00:10:35.348 Test: blockdev reset ...[2024-10-07 12:20:58.590766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:35.348 passed 00:10:35.348 Test: blockdev write read 8 blocks ...[2024-10-07 12:20:58.595566] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:35.348 passed 00:10:35.348 Test: blockdev write read size > 128k ...passed 00:10:35.348 Test: blockdev write read invalid size ...passed 00:10:35.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.348 Test: blockdev write read max offset ...passed 00:10:35.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.348 Test: blockdev writev readv 8 blocks ...passed 00:10:35.348 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.348 Test: blockdev writev readv block ...passed 00:10:35.348 Test: blockdev writev readv size > 128k ...passed 00:10:35.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.348 Test: blockdev comparev and writev ...[2024-10-07 12:20:58.604563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4c32000 len:0x1000 00:10:35.348 [2024-10-07 12:20:58.604615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:35.348 passed 00:10:35.348 Test: blockdev nvme passthru rw ...passed 00:10:35.348 Test: blockdev nvme passthru vendor specific ...passed 00:10:35.348 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:58.605579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:35.348 [2024-10-07 12:20:58.605613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:35.348 passed 00:10:35.348 Test: blockdev copy ...passed 00:10:35.348 Suite: bdevio tests on: Nvme1n1p2 00:10:35.348 Test: blockdev write read block ...passed 00:10:35.348 Test: blockdev write zeroes read block ...passed 00:10:35.348 Test: blockdev write zeroes read no split ...passed 00:10:35.606 Test: blockdev write zeroes read split ...passed 00:10:35.606 Test: blockdev write zeroes read split partial ...passed 00:10:35.606 Test: blockdev reset ...[2024-10-07 12:20:58.685255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:35.606 [2024-10-07 12:20:58.689680] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
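The INVALID OPCODE (00/01) completions paired with the passthru cases are likewise intentional: the test submits opcodes a PCIe controller does not implement (printed as FABRIC CONNECT or FABRIC RESERVED / VENDOR SPECIFIC because opcode 0x7f is the fabrics command, meaningful only on NVMe-oF) and expects the controller to reject them. A hypothetical reproduction with nvme-cli against a kernel-owned controller (device path assumed):

  nvme admin-passthru /dev/nvme0 --opcode=0x7f   # expect an Invalid Command Opcode status back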
00:10:35.606 passed 00:10:35.606 Test: blockdev write read 8 blocks ...passed 00:10:35.606 Test: blockdev write read size > 128k ...passed 00:10:35.606 Test: blockdev write read invalid size ...passed 00:10:35.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.606 Test: blockdev write read max offset ...passed 00:10:35.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.606 Test: blockdev writev readv 8 blocks ...passed 00:10:35.606 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.606 Test: blockdev writev readv block ...passed 00:10:35.606 Test: blockdev writev readv size > 128k ...passed 00:10:35.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.606 Test: blockdev comparev and writev ...[2024-10-07 12:20:58.699399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c4c2e000 len:0x1000 00:10:35.606 [2024-10-07 12:20:58.699444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:35.606 passed 00:10:35.606 Test: blockdev nvme passthru rw ...passed 00:10:35.606 Test: blockdev nvme passthru vendor specific ...passed 00:10:35.606 Test: blockdev nvme admin passthru ...passed 00:10:35.606 Test: blockdev copy ...passed 00:10:35.606 Suite: bdevio tests on: Nvme1n1p1 00:10:35.606 Test: blockdev write read block ...passed 00:10:35.606 Test: blockdev write zeroes read block ...passed 00:10:35.606 Test: blockdev write zeroes read no split ...passed 00:10:35.606 Test: blockdev write zeroes read split ...passed 00:10:35.606 Test: blockdev write zeroes read split partial ...passed 00:10:35.606 Test: blockdev reset ...[2024-10-07 12:20:58.770835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:35.606 [2024-10-07 12:20:58.775119] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
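Note the LBAs in the COMPARE notices: the Nvme1n1p2 suite above compares at lba:655360 and the Nvme1n1p1 suite below at lba:256, while the whole-namespace suites compare at lba:0. Those are the GPT partition start offsets into the shared Nvme1n1 namespace. One way to confirm the layout (the driver_specific.gpt/offset_blocks field names are assumptions from memory of the part-bdev dump, not from this log):

  scripts/rpc.py bdev_get_bdevs | jq '.[] | select(.name | startswith("Nvme1n1p")) | {name, offset: .driver_specific.gpt.offset_blocks}'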
00:10:35.606 passed 00:10:35.606 Test: blockdev write read 8 blocks ...passed 00:10:35.606 Test: blockdev write read size > 128k ...passed 00:10:35.606 Test: blockdev write read invalid size ...passed 00:10:35.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.606 Test: blockdev write read max offset ...passed 00:10:35.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.606 Test: blockdev writev readv 8 blocks ...passed 00:10:35.606 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.606 Test: blockdev writev readv block ...passed 00:10:35.606 Test: blockdev writev readv size > 128k ...passed 00:10:35.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.606 Test: blockdev comparev and writev ...[2024-10-07 12:20:58.784757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ba40e000 len:0x1000 00:10:35.606 [2024-10-07 12:20:58.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:35.606 passed 00:10:35.606 Test: blockdev nvme passthru rw ...passed 00:10:35.606 Test: blockdev nvme passthru vendor specific ...passed 00:10:35.606 Test: blockdev nvme admin passthru ...passed 00:10:35.606 Test: blockdev copy ...passed 00:10:35.606 Suite: bdevio tests on: Nvme0n1 00:10:35.606 Test: blockdev write read block ...passed 00:10:35.606 Test: blockdev write zeroes read block ...passed 00:10:35.606 Test: blockdev write zeroes read no split ...passed 00:10:35.606 Test: blockdev write zeroes read split ...passed 00:10:35.606 Test: blockdev write zeroes read split partial ...passed 00:10:35.606 Test: blockdev reset ...[2024-10-07 12:20:58.855709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:35.606 passed 00:10:35.607 Test: blockdev write read 8 blocks ...[2024-10-07 12:20:58.859896] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:35.607 passed 00:10:35.607 Test: blockdev write read size > 128k ...passed 00:10:35.607 Test: blockdev write read invalid size ...passed 00:10:35.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:35.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:35.607 Test: blockdev write read max offset ...passed 00:10:35.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:35.607 Test: blockdev writev readv 8 blocks ...passed 00:10:35.607 Test: blockdev writev readv 30 x 1block ...passed 00:10:35.607 Test: blockdev writev readv block ...passed 00:10:35.607 Test: blockdev writev readv size > 128k ...passed 00:10:35.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:35.607 Test: blockdev comparev and writev ...passed 00:10:35.607 Test: blockdev nvme passthru rw ...[2024-10-07 12:20:58.868332] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:35.607 separate metadata which is not supported yet.
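The skip message above is the one data-path difference for Nvme0n1 in this run: its namespace is formatted with separate (non-interleaved) metadata, which bdevio's comparev_and_writev path does not support yet, so the case is skipped rather than failed. A quick way to inspect the metadata layout on such a setup (the md_size/md_interleave field names are recalled from bdev_get_bdevs output, not from this log):

  scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {name, block_size, md_size, md_interleave}'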
00:10:35.607 passed 00:10:35.607 Test: blockdev nvme passthru vendor specific ...passed 00:10:35.607 Test: blockdev nvme admin passthru ...[2024-10-07 12:20:58.868969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:35.607 [2024-10-07 12:20:58.869016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:35.607 passed 00:10:35.607 Test: blockdev copy ...passed 00:10:35.607 00:10:35.607 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.607 suites 7 7 n/a 0 0 00:10:35.607 tests 161 161 161 0 0 00:10:35.607 asserts 1025 1025 1025 0 n/a 00:10:35.607 00:10:35.607 Elapsed time = 1.804 seconds 00:10:35.607 0 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62987 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 62987 ']' 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 62987 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62987 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62987' 00:10:35.865 killing process with pid 62987 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 62987 00:10:35.865 12:20:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 62987 00:10:36.799 12:21:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:36.799 00:10:36.799 real 0m3.117s 00:10:36.799 user 0m7.654s 00:10:36.799 sys 0m0.482s 00:10:36.799 ************************************ 00:10:36.799 END TEST bdev_bounds 00:10:36.799 ************************************ 00:10:36.799 12:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.799 12:21:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:37.058 12:21:00 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:37.058 12:21:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:37.058 12:21:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.058 12:21:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:37.058 ************************************ 00:10:37.058 START TEST bdev_nbd 00:10:37.058 ************************************ 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63052 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63052 /var/tmp/spdk-nbd.sock 00:10:37.058 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 63052 ']' 00:10:37.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:37.059 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:37.059 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.059 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:37.059 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.059 12:21:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:37.059 [2024-10-07 12:21:00.237013] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
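From here the trace is nbd_function_test: bdev_svc is now running with the seven bdevs listed above, and nbd_rpc_start_stop_verify exports each one as a kernel /dev/nbdN device, reads it back, and tears it down again. Condensed to its essentials, the cycle traced below is (commands and socket path exactly as in this log; only the scratch file path is shortened):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0                          # export the bdev over NBD
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # O_DIRECT read-back check
  $rpc nbd_stop_disk /dev/nbd0                                   # detach the device again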
00:10:37.059 [2024-10-07 12:21:00.237374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.317 [2024-10-07 12:21:00.411732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.576 [2024-10-07 12:21:00.619988] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:38.144 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.404 1+0 records in 00:10:38.404 1+0 records out 00:10:38.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085487 s, 4.8 MB/s 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:38.404 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.663 1+0 records in 00:10:38.663 1+0 records out 00:10:38.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741802 s, 5.5 MB/s 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:38.663 12:21:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.923 1+0 records in 00:10:38.923 1+0 records out 00:10:38.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793759 s, 5.2 MB/s 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:38.923 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.182 1+0 records in 00:10:39.182 1+0 records out 00:10:39.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691156 s, 5.9 MB/s 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:39.182 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.441 1+0 records in 00:10:39.441 1+0 records out 00:10:39.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862631 s, 4.7 MB/s 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:39.441 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:39.700 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.701 1+0 records in 00:10:39.701 1+0 records out 00:10:39.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977311 s, 4.2 MB/s 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:39.701 12:21:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.959 1+0 records in 00:10:39.959 1+0 records out 00:10:39.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153488 s, 2.7 MB/s 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:39.959 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd0", 00:10:40.219 "bdev_name": "Nvme0n1" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd1", 00:10:40.219 "bdev_name": "Nvme1n1p1" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd2", 00:10:40.219 "bdev_name": "Nvme1n1p2" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd3", 00:10:40.219 "bdev_name": "Nvme2n1" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd4", 00:10:40.219 "bdev_name": "Nvme2n2" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd5", 00:10:40.219 "bdev_name": "Nvme2n3" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd6", 00:10:40.219 "bdev_name": "Nvme3n1" 00:10:40.219 } 00:10:40.219 ]' 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd0", 00:10:40.219 "bdev_name": "Nvme0n1" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd1", 00:10:40.219 "bdev_name": "Nvme1n1p1" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd2", 00:10:40.219 "bdev_name": "Nvme1n1p2" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd3", 00:10:40.219 "bdev_name": "Nvme2n1" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd4", 00:10:40.219 "bdev_name": "Nvme2n2" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd5", 00:10:40.219 "bdev_name": "Nvme2n3" 00:10:40.219 }, 00:10:40.219 { 00:10:40.219 "nbd_device": "/dev/nbd6", 00:10:40.219 "bdev_name": "Nvme3n1" 00:10:40.219 } 00:10:40.219 ]' 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.219 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.478 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.737 12:21:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.995 12:21:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.995 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.253 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.514 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.773 12:21:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:42.032 
12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:42.032 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:42.291 /dev/nbd0 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:42.291 1+0 records in 00:10:42.291 1+0 records out 00:10:42.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736081 s, 5.6 MB/s 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:42.291 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:42.550 /dev/nbd1 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.550 12:21:05 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:42.550 1+0 records in 00:10:42.550 1+0 records out 00:10:42.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668936 s, 6.1 MB/s 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.550 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:42.551 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:42.809 /dev/nbd10 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.809 12:21:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:10:42.809 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:42.809 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.809 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:42.810 1+0 records in 00:10:42.810 1+0 records out 00:10:42.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564628 s, 7.3 MB/s 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:42.810 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:43.068 /dev/nbd11 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.068 1+0 records in 00:10:43.068 1+0 records out 00:10:43.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817486 s, 5.0 MB/s 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:43.068 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:43.328 /dev/nbd12 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
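The grep/break/dd pattern repeated throughout this trace is the waitfornbd helper from autotest_common.sh: poll /proc/partitions until the nbd device registers, then prove it answers a direct read. A condensed reconstruction from the xtrace (the sleep interval and the scratch path are assumptions; the real helper also stats and removes the scratch file, as the @886-@888 lines show):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device registered?
          sleep 0.1
      done
      # the device must serve a 4 KiB O_DIRECT read before it counts as ready
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      rm -f /tmp/nbdtest
      return 0
  }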
00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.328 1+0 records in 00:10:43.328 1+0 records out 00:10:43.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771797 s, 5.3 MB/s 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:43.328 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:43.597 /dev/nbd13 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.597 1+0 records in 00:10:43.597 1+0 records out 00:10:43.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121651 s, 3.4 MB/s 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:43.597 12:21:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:43.857 /dev/nbd14 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.857 1+0 records in 00:10:43.857 1+0 records out 00:10:43.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789604 s, 5.2 MB/s 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.857 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd0", 00:10:44.116 "bdev_name": "Nvme0n1" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd1", 00:10:44.116 "bdev_name": "Nvme1n1p1" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd10", 00:10:44.116 "bdev_name": "Nvme1n1p2" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd11", 00:10:44.116 "bdev_name": "Nvme2n1" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd12", 00:10:44.116 "bdev_name": "Nvme2n2" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd13", 00:10:44.116 "bdev_name": "Nvme2n3" 
00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd14", 00:10:44.116 "bdev_name": "Nvme3n1" 00:10:44.116 } 00:10:44.116 ]' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd0", 00:10:44.116 "bdev_name": "Nvme0n1" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd1", 00:10:44.116 "bdev_name": "Nvme1n1p1" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd10", 00:10:44.116 "bdev_name": "Nvme1n1p2" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd11", 00:10:44.116 "bdev_name": "Nvme2n1" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd12", 00:10:44.116 "bdev_name": "Nvme2n2" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd13", 00:10:44.116 "bdev_name": "Nvme2n3" 00:10:44.116 }, 00:10:44.116 { 00:10:44.116 "nbd_device": "/dev/nbd14", 00:10:44.116 "bdev_name": "Nvme3n1" 00:10:44.116 } 00:10:44.116 ]' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:44.116 /dev/nbd1 00:10:44.116 /dev/nbd10 00:10:44.116 /dev/nbd11 00:10:44.116 /dev/nbd12 00:10:44.116 /dev/nbd13 00:10:44.116 /dev/nbd14' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:44.116 /dev/nbd1 00:10:44.116 /dev/nbd10 00:10:44.116 /dev/nbd11 00:10:44.116 /dev/nbd12 00:10:44.116 /dev/nbd13 00:10:44.116 /dev/nbd14' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:44.116 256+0 records in 00:10:44.116 256+0 records out 00:10:44.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626567 s, 167 MB/s 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.116 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:44.375 256+0 records in 00:10:44.375 256+0 records out 00:10:44.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.145206 s, 7.2 MB/s 00:10:44.375 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.375 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:44.375 256+0 records in 00:10:44.375 256+0 records out 00:10:44.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150332 s, 7.0 MB/s 00:10:44.375 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.375 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:44.635 256+0 records in 00:10:44.635 256+0 records out 00:10:44.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151185 s, 6.9 MB/s 00:10:44.635 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.635 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:44.894 256+0 records in 00:10:44.894 256+0 records out 00:10:44.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150366 s, 7.0 MB/s 00:10:44.894 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.894 12:21:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:44.894 256+0 records in 00:10:44.894 256+0 records out 00:10:44.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152548 s, 6.9 MB/s 00:10:44.894 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:44.894 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:45.156 256+0 records in 00:10:45.156 256+0 records out 00:10:45.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146321 s, 7.2 MB/s 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:45.156 256+0 records in 00:10:45.156 256+0 records out 00:10:45.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147795 s, 7.1 MB/s 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:45.156 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.416 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.675 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.934 12:21:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.934 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.192 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.450 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.708 12:21:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.966 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:47.225 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:47.483 malloc_lvol_verify 00:10:47.483 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:47.743 c580b90c-007f-4fb6-a857-fc1835a04246 00:10:47.743 12:21:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:47.743 7f79e4f0-e29f-4558-9cec-d0e5a10a11bc 00:10:47.743 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:48.002 /dev/nbd0 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:48.002 mke2fs 1.47.0 (5-Feb-2023) 00:10:48.002 Discarding device blocks: 0/4096 done 00:10:48.002 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:48.002 00:10:48.002 Allocating group tables: 0/1 done 00:10:48.002 Writing inode tables: 0/1 done 00:10:48.002 Creating journal (1024 blocks): done 00:10:48.002 Writing superblocks and filesystem accounting information: 0/1 done 00:10:48.002 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:48.002 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63052 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 63052 ']' 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 63052 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63052 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.261 killing process with pid 63052 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63052' 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 63052 00:10:48.261 12:21:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 63052 00:10:49.637 12:21:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:49.637 00:10:49.637 real 0m12.694s 00:10:49.637 user 0m16.078s 00:10:49.637 sys 0m5.408s 00:10:49.637 12:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.637 12:21:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:49.637 ************************************ 00:10:49.637 END TEST bdev_nbd 00:10:49.637 ************************************ 00:10:49.637 12:21:12 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:49.637 12:21:12 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:10:49.638 12:21:12 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:10:49.638 12:21:12 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:49.638 skipping fio tests on NVMe due to multi-ns failures. 
00:10:49.638 12:21:12 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:49.638 12:21:12 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:49.638 12:21:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:49.638 12:21:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.638 12:21:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:49.638 ************************************ 00:10:49.638 START TEST bdev_verify 00:10:49.638 ************************************ 00:10:49.638 12:21:12 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:49.896 [2024-10-07 12:21:12.999587] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:49.896 [2024-10-07 12:21:12.999698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63473 ] 00:10:49.896 [2024-10-07 12:21:13.172570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:50.155 [2024-10-07 12:21:13.379230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.155 [2024-10-07 12:21:13.379286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.093 Running I/O for 5 seconds... 
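For readers decoding the bdevperf flags in the invocation above: -q sets the queue depth per job, -o the I/O size in bytes, -w the workload, -t the run time in seconds, and -m the core mask; -C appears to let every core in the mask submit I/O to every bdev, which is why the result table that follows reports two jobs per namespace, one on core mask 0x1 and one on 0x2. An annotated restatement of the same command, purely as a readability aid:

    #   -q 128     128 outstanding I/Os per job
    #   -o 4096    4 KiB per I/O
    #   -w verify  write a pattern, read it back, and compare
    #   -t 5       run for five seconds
    #   -C         every core in the mask drives every bdev
    #   -m 0x3     core mask: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3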
00:10:53.406 18368.00 IOPS, 71.75 MiB/s [2024-10-07T12:21:17.634Z] 18432.00 IOPS, 72.00 MiB/s [2024-10-07T12:21:18.571Z] 18218.67 IOPS, 71.17 MiB/s [2024-10-07T12:21:19.509Z] 17920.00 IOPS, 70.00 MiB/s [2024-10-07T12:21:19.509Z] 18073.60 IOPS, 70.60 MiB/s 00:10:56.218 Latency(us) 00:10:56.218 [2024-10-07T12:21:19.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.218 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0xbd0bd 00:10:56.218 Nvme0n1 : 5.09 1069.63 4.18 0.00 0.00 119063.12 12633.45 105278.71 00:10:56.218 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:56.218 Nvme0n1 : 5.06 1467.84 5.73 0.00 0.00 86956.38 19897.68 77906.25 00:10:56.218 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0x4ff80 00:10:56.218 Nvme1n1p1 : 5.09 1069.05 4.18 0.00 0.00 118769.47 13791.51 100646.45 00:10:56.218 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:56.218 Nvme1n1p1 : 5.06 1467.41 5.73 0.00 0.00 86725.00 21582.14 72431.76 00:10:56.218 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0x4ff7f 00:10:56.218 Nvme1n1p2 : 5.09 1068.79 4.17 0.00 0.00 118549.77 11896.49 101909.80 00:10:56.218 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:56.218 Nvme1n1p2 : 5.06 1466.98 5.73 0.00 0.00 86599.23 21582.14 69905.07 00:10:56.218 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0x80000 00:10:56.218 Nvme2n1 : 5.10 1079.10 4.22 0.00 0.00 117592.19 8632.85 96856.42 00:10:56.218 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x80000 length 0x80000 00:10:56.218 Nvme2n1 : 5.06 1466.58 5.73 0.00 0.00 86496.72 21582.14 70747.30 00:10:56.218 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0x80000 00:10:56.218 Nvme2n2 : 5.10 1078.86 4.21 0.00 0.00 117352.90 8843.41 97698.65 00:10:56.218 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x80000 length 0x80000 00:10:56.218 Nvme2n2 : 5.08 1474.91 5.76 0.00 0.00 85935.87 9001.33 74116.22 00:10:56.218 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0x80000 00:10:56.218 Nvme2n3 : 5.10 1078.61 4.21 0.00 0.00 117072.25 8685.49 101909.80 00:10:56.218 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x80000 length 0x80000 00:10:56.218 Nvme2n3 : 5.08 1474.18 5.76 0.00 0.00 85859.44 10317.31 77064.02 00:10:56.218 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x0 length 0x20000 00:10:56.218 Nvme3n1 : 5.10 1078.36 4.21 0.00 0.00 116964.59 8580.22 105278.71 00:10:56.218 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:56.218 Verification LBA range: start 0x20000 length 0x20000 
00:10:56.218 Nvme3n1 : 5.08 1473.77 5.76 0.00 0.00 85770.93 8053.82 77906.25 00:10:56.218 [2024-10-07T12:21:19.510Z] =================================================================================================================== 00:10:56.219 [2024-10-07T12:21:19.510Z] Total : 17814.07 69.59 0.00 0.00 99707.47 8053.82 105278.71 00:10:57.624 00:10:57.624 real 0m7.816s 00:10:57.624 user 0m14.184s 00:10:57.624 sys 0m0.332s 00:10:57.624 12:21:20 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.624 12:21:20 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:57.624 ************************************ 00:10:57.624 END TEST bdev_verify 00:10:57.624 ************************************ 00:10:57.624 12:21:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:57.624 12:21:20 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:57.624 12:21:20 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.624 12:21:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.624 ************************************ 00:10:57.624 START TEST bdev_verify_big_io 00:10:57.624 ************************************ 00:10:57.624 12:21:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:57.624 [2024-10-07 12:21:20.895806] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:57.624 [2024-10-07 12:21:20.895942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63583 ] 00:10:57.882 [2024-10-07 12:21:21.058461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:58.140 [2024-10-07 12:21:21.269455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.140 [2024-10-07 12:21:21.269484] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.076 Running I/O for 5 seconds... 
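This second pass repeats the verify workload at -o 65536, i.e. 64 KiB per I/O instead of 4 KiB, so raw IOPS drop while per-I/O bandwidth rises. The throughput ticks below follow directly from IOPS * 64 KiB / 1 MiB = IOPS / 16: the first tick of 2443.00 IOPS works out to 2443 / 16 = 152.69 MiB/s, exactly as reported.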
00:11:04.270 2443.00 IOPS, 152.69 MiB/s [2024-10-07T12:21:28.495Z] 3731.00 IOPS, 233.19 MiB/s [2024-10-07T12:21:28.495Z] 4528.00 IOPS, 283.00 MiB/s 00:11:05.204 Latency(us) 00:11:05.204 [2024-10-07T12:21:28.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.204 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.204 Verification LBA range: start 0x0 length 0xbd0b 00:11:05.204 Nvme0n1 : 5.65 104.69 6.54 0.00 0.00 1180158.57 13896.79 1340829.71 00:11:05.204 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.204 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:05.204 Nvme0n1 : 5.49 202.80 12.67 0.00 0.00 610885.04 25582.73 616512.15 00:11:05.204 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.204 Verification LBA range: start 0x0 length 0x4ff8 00:11:05.205 Nvme1n1p1 : 5.69 98.57 6.16 0.00 0.00 1193305.15 57271.62 1832691.87 00:11:05.205 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:05.205 Nvme1n1p1 : 5.49 205.45 12.84 0.00 0.00 594928.71 52218.24 539027.02 00:11:05.205 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x0 length 0x4ff7 00:11:05.205 Nvme1n1p2 : 5.76 115.65 7.23 0.00 0.00 983480.72 30320.27 1280189.17 00:11:05.205 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:05.205 Nvme1n1p2 : 5.55 208.03 13.00 0.00 0.00 579417.77 89276.35 613143.24 00:11:05.205 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x0 length 0x8000 00:11:05.205 Nvme2n1 : 5.76 118.91 7.43 0.00 0.00 935495.44 34110.30 1307140.52 00:11:05.205 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x8000 length 0x8000 00:11:05.205 Nvme2n1 : 5.50 209.63 13.10 0.00 0.00 568844.47 89276.35 586191.88 00:11:05.205 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x0 length 0x8000 00:11:05.205 Nvme2n2 : 5.92 140.96 8.81 0.00 0.00 762376.76 15475.97 1664245.92 00:11:05.205 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x8000 length 0x8000 00:11:05.205 Nvme2n2 : 5.56 218.70 13.67 0.00 0.00 540525.33 4974.42 599667.56 00:11:05.205 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x0 length 0x8000 00:11:05.205 Nvme2n3 : 6.06 181.88 11.37 0.00 0.00 573085.23 7685.35 2021351.33 00:11:05.205 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x8000 length 0x8000 00:11:05.205 Nvme2n3 : 5.57 225.70 14.11 0.00 0.00 516839.87 7580.07 613143.24 00:11:05.205 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x0 length 0x2000 00:11:05.205 Nvme3n1 : 6.22 262.51 16.41 0.00 0.00 387970.09 1256.76 1751837.82 00:11:05.205 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:05.205 Verification LBA range: start 0x2000 length 0x2000 00:11:05.205 Nvme3n1 : 5.57 225.47 14.09 0.00 0.00 508924.20 7843.26 623249.99 00:11:05.205 
[2024-10-07T12:21:28.496Z] =================================================================================================================== 00:11:05.205 [2024-10-07T12:21:28.496Z] Total : 2518.96 157.43 0.00 0.00 640170.36 1256.76 2021351.33 00:11:07.750 00:11:07.750 real 0m9.667s 00:11:07.750 user 0m17.882s 00:11:07.750 sys 0m0.361s 00:11:07.750 ************************************ 00:11:07.750 END TEST bdev_verify_big_io 00:11:07.750 ************************************ 00:11:07.750 12:21:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.750 12:21:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.750 12:21:30 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:07.750 12:21:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:07.750 12:21:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.750 12:21:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:07.750 ************************************ 00:11:07.750 START TEST bdev_write_zeroes 00:11:07.750 ************************************ 00:11:07.750 12:21:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:07.750 [2024-10-07 12:21:30.638549] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:07.750 [2024-10-07 12:21:30.638844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63703 ] 00:11:07.750 [2024-10-07 12:21:30.811841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.750 [2024-10-07 12:21:31.008539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.684 Running I/O for 1 seconds... 
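The write_zeroes pass issues zero-fill writes (-w write_zeroes) at 4 KiB and queue depth 128 for one second on a single core. The same IOPS-to-bandwidth arithmetic applies: at -o 4096 there are 256 I/Os per MiB, so the 73472.00 IOPS tick below corresponds to 73472 / 256 = 287.00 MiB/s.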
00:11:09.619 73472.00 IOPS, 287.00 MiB/s 00:11:09.619 Latency(us) 00:11:09.619 [2024-10-07T12:21:32.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.619 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme0n1 : 1.02 10440.50 40.78 0.00 0.00 12226.15 10580.51 32846.96 00:11:09.619 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme1n1p1 : 1.02 10429.21 40.74 0.00 0.00 12224.28 10317.31 33689.19 00:11:09.619 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme1n1p2 : 1.03 10418.21 40.70 0.00 0.00 12200.02 10369.95 31583.61 00:11:09.619 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme2n1 : 1.03 10408.77 40.66 0.00 0.00 12193.75 10580.51 31794.17 00:11:09.619 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme2n2 : 1.03 10452.47 40.83 0.00 0.00 12074.13 6843.12 24635.22 00:11:09.619 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme2n3 : 1.03 10443.00 40.79 0.00 0.00 12044.35 6843.12 22108.53 00:11:09.619 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:09.619 Nvme3n1 : 1.03 10433.66 40.76 0.00 0.00 12022.66 7211.59 21897.97 00:11:09.619 [2024-10-07T12:21:32.910Z] =================================================================================================================== 00:11:09.619 [2024-10-07T12:21:32.910Z] Total : 73025.83 285.26 0.00 0.00 12140.52 6843.12 33689.19 00:11:10.998 00:11:10.998 real 0m3.443s 00:11:10.998 user 0m3.061s 00:11:10.998 sys 0m0.266s 00:11:10.998 12:21:33 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.998 12:21:33 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:10.998 ************************************ 00:11:10.998 END TEST bdev_write_zeroes 00:11:10.998 ************************************ 00:11:10.998 12:21:34 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.998 12:21:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:10.998 12:21:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.998 12:21:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.998 ************************************ 00:11:10.998 START TEST bdev_json_nonenclosed 00:11:10.998 ************************************ 00:11:10.999 12:21:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.999 [2024-10-07 12:21:34.165164] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:10.999 [2024-10-07 12:21:34.165306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63762 ] 00:11:11.257 [2024-10-07 12:21:34.341263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.257 [2024-10-07 12:21:34.537504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.257 [2024-10-07 12:21:34.537612] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:11.257 [2024-10-07 12:21:34.537635] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:11.257 [2024-10-07 12:21:34.537647] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:11.824 00:11:11.824 real 0m0.869s 00:11:11.824 user 0m0.605s 00:11:11.824 sys 0m0.158s 00:11:11.824 12:21:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.824 ************************************ 00:11:11.824 END TEST bdev_json_nonenclosed 00:11:11.824 ************************************ 00:11:11.824 12:21:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:11.824 12:21:34 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:11.824 12:21:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:11.824 12:21:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.824 12:21:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:11.824 ************************************ 00:11:11.824 START TEST bdev_json_nonarray 00:11:11.824 ************************************ 00:11:11.824 12:21:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:11.824 [2024-10-07 12:21:35.104793] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:11.824 [2024-10-07 12:21:35.104954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:11:12.083 [2024-10-07 12:21:35.282081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.356 [2024-10-07 12:21:35.480718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.356 [2024-10-07 12:21:35.480816] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:12.356 [2024-10-07 12:21:35.480839] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:12.356 [2024-10-07 12:21:35.480852] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.615 00:11:12.615 real 0m0.881s 00:11:12.615 user 0m0.618s 00:11:12.615 sys 0m0.157s 00:11:12.615 12:21:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.615 12:21:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:12.615 ************************************ 00:11:12.616 END TEST bdev_json_nonarray 00:11:12.616 ************************************ 00:11:12.875 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:11:12.875 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:11:12.875 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:12.875 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:12.875 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.875 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:12.875 ************************************ 00:11:12.875 START TEST bdev_gpt_uuid 00:11:12.875 ************************************ 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63825 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63825 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 63825 ']' 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.875 12:21:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:12.875 [2024-10-07 12:21:36.062124] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:12.875 [2024-10-07 12:21:36.062254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ] 00:11:13.134 [2024-10-07 12:21:36.234996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.393 [2024-10-07 12:21:36.428549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:14.331 Some configs were skipped because the RPC state that can call them passed over. 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.331 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.589 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:11:14.589 { 00:11:14.589 "name": "Nvme1n1p1", 00:11:14.589 "aliases": [ 00:11:14.589 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:14.589 ], 00:11:14.589 "product_name": "GPT Disk", 00:11:14.589 "block_size": 4096, 00:11:14.589 "num_blocks": 655104, 00:11:14.589 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:14.589 "assigned_rate_limits": { 00:11:14.589 "rw_ios_per_sec": 0, 00:11:14.589 "rw_mbytes_per_sec": 0, 00:11:14.589 "r_mbytes_per_sec": 0, 00:11:14.589 "w_mbytes_per_sec": 0 00:11:14.589 }, 00:11:14.589 "claimed": false, 00:11:14.589 "zoned": false, 00:11:14.589 "supported_io_types": { 00:11:14.589 "read": true, 00:11:14.589 "write": true, 00:11:14.589 "unmap": true, 00:11:14.589 "flush": true, 00:11:14.589 "reset": true, 00:11:14.589 "nvme_admin": false, 00:11:14.589 "nvme_io": false, 00:11:14.589 "nvme_io_md": false, 00:11:14.589 "write_zeroes": true, 00:11:14.589 "zcopy": false, 00:11:14.589 "get_zone_info": false, 00:11:14.589 "zone_management": false, 00:11:14.589 "zone_append": false, 00:11:14.589 "compare": true, 00:11:14.589 "compare_and_write": false, 00:11:14.589 "abort": true, 00:11:14.589 "seek_hole": false, 00:11:14.589 "seek_data": false, 00:11:14.589 "copy": true, 00:11:14.589 "nvme_iov_md": false 00:11:14.589 }, 00:11:14.589 "driver_specific": { 
00:11:14.589 "gpt": { 00:11:14.589 "base_bdev": "Nvme1n1", 00:11:14.589 "offset_blocks": 256, 00:11:14.589 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:14.589 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:14.589 "partition_name": "SPDK_TEST_first" 00:11:14.589 } 00:11:14.589 } 00:11:14.589 } 00:11:14.589 ]' 00:11:14.589 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:11:14.589 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:11:14.590 { 00:11:14.590 "name": "Nvme1n1p2", 00:11:14.590 "aliases": [ 00:11:14.590 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:14.590 ], 00:11:14.590 "product_name": "GPT Disk", 00:11:14.590 "block_size": 4096, 00:11:14.590 "num_blocks": 655103, 00:11:14.590 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:14.590 "assigned_rate_limits": { 00:11:14.590 "rw_ios_per_sec": 0, 00:11:14.590 "rw_mbytes_per_sec": 0, 00:11:14.590 "r_mbytes_per_sec": 0, 00:11:14.590 "w_mbytes_per_sec": 0 00:11:14.590 }, 00:11:14.590 "claimed": false, 00:11:14.590 "zoned": false, 00:11:14.590 "supported_io_types": { 00:11:14.590 "read": true, 00:11:14.590 "write": true, 00:11:14.590 "unmap": true, 00:11:14.590 "flush": true, 00:11:14.590 "reset": true, 00:11:14.590 "nvme_admin": false, 00:11:14.590 "nvme_io": false, 00:11:14.590 "nvme_io_md": false, 00:11:14.590 "write_zeroes": true, 00:11:14.590 "zcopy": false, 00:11:14.590 "get_zone_info": false, 00:11:14.590 "zone_management": false, 00:11:14.590 "zone_append": false, 00:11:14.590 "compare": true, 00:11:14.590 "compare_and_write": false, 00:11:14.590 "abort": true, 00:11:14.590 "seek_hole": false, 00:11:14.590 "seek_data": false, 00:11:14.590 "copy": true, 00:11:14.590 "nvme_iov_md": false 00:11:14.590 }, 00:11:14.590 "driver_specific": { 00:11:14.590 "gpt": { 00:11:14.590 "base_bdev": "Nvme1n1", 00:11:14.590 "offset_blocks": 655360, 00:11:14.590 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:14.590 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:14.590 "partition_name": "SPDK_TEST_second" 00:11:14.590 } 00:11:14.590 } 00:11:14.590 } 00:11:14.590 ]' 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:11:14.590 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63825 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 63825 ']' 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 63825 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63825 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.847 killing process with pid 63825 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63825' 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 63825 00:11:14.847 12:21:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 63825 00:11:17.419 00:11:17.419 real 0m4.537s 00:11:17.419 user 0m4.586s 00:11:17.419 sys 0m0.589s 00:11:17.419 12:21:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.419 12:21:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 ************************************ 00:11:17.419 END TEST bdev_gpt_uuid 00:11:17.419 ************************************ 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:17.419 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.244 Waiting for block devices as requested 00:11:18.245 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.245 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:18.504 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.504 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.776 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:23.776 12:21:46 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:23.776 12:21:46 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:24.035 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:24.035 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:24.035 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:24.035 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:24.035 12:21:47 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:24.035 00:11:24.035 real 1m7.617s 00:11:24.035 user 1m23.053s 00:11:24.035 sys 0m12.669s 00:11:24.035 12:21:47 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.035 12:21:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:24.035 ************************************ 00:11:24.035 END TEST blockdev_nvme_gpt 00:11:24.035 ************************************ 00:11:24.035 12:21:47 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:24.035 12:21:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:24.035 12:21:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.035 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:11:24.035 ************************************ 00:11:24.035 START TEST nvme 00:11:24.035 ************************************ 00:11:24.035 12:21:47 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:24.035 * Looking for test storage... 00:11:24.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.296 12:21:47 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.296 12:21:47 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.296 12:21:47 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.296 12:21:47 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.296 12:21:47 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.296 12:21:47 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:24.296 12:21:47 nvme -- scripts/common.sh@345 -- # : 1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.296 12:21:47 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.296 12:21:47 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@353 -- # local d=1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.296 12:21:47 nvme -- scripts/common.sh@355 -- # echo 1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.296 12:21:47 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@353 -- # local d=2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.296 12:21:47 nvme -- scripts/common.sh@355 -- # echo 2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.296 12:21:47 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.296 12:21:47 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.296 12:21:47 nvme -- scripts/common.sh@368 -- # return 0 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.296 --rc genhtml_branch_coverage=1 00:11:24.296 --rc genhtml_function_coverage=1 00:11:24.296 --rc genhtml_legend=1 00:11:24.296 --rc geninfo_all_blocks=1 00:11:24.296 --rc geninfo_unexecuted_blocks=1 00:11:24.296 00:11:24.296 ' 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.296 --rc genhtml_branch_coverage=1 00:11:24.296 --rc genhtml_function_coverage=1 00:11:24.296 --rc genhtml_legend=1 00:11:24.296 --rc geninfo_all_blocks=1 00:11:24.296 --rc geninfo_unexecuted_blocks=1 00:11:24.296 00:11:24.296 ' 00:11:24.296 12:21:47 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:24.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.296 --rc genhtml_branch_coverage=1 00:11:24.296 --rc genhtml_function_coverage=1 00:11:24.297 --rc genhtml_legend=1 00:11:24.297 --rc geninfo_all_blocks=1 00:11:24.297 --rc geninfo_unexecuted_blocks=1 00:11:24.297 00:11:24.297 ' 00:11:24.297 12:21:47 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:24.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.297 --rc genhtml_branch_coverage=1 00:11:24.297 --rc genhtml_function_coverage=1 00:11:24.297 --rc genhtml_legend=1 00:11:24.297 --rc geninfo_all_blocks=1 00:11:24.297 --rc geninfo_unexecuted_blocks=1 00:11:24.297 00:11:24.297 ' 00:11:24.297 12:21:47 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.806 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.806 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.806 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.806 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.806 12:21:49 nvme -- nvme/nvme.sh@79 -- # uname 00:11:26.064 12:21:49 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:26.064 12:21:49 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:26.064 12:21:49 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:26.064 12:21:49 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1071 -- # stubpid=64487 00:11:26.064 Waiting for stub to ready for secondary processes... 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64487 ]] 00:11:26.064 12:21:49 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:11:26.064 [2024-10-07 12:21:49.162183] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:26.064 [2024-10-07 12:21:49.162315] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:27.000 12:21:50 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:27.000 12:21:50 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64487 ]] 00:11:27.000 12:21:50 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:11:27.000 [2024-10-07 12:21:50.178521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.259 [2024-10-07 12:21:50.379303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.259 [2024-10-07 12:21:50.379362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.259 [2024-10-07 12:21:50.379403] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.259 [2024-10-07 12:21:50.397534] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:27.259 [2024-10-07 12:21:50.397573] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.259 [2024-10-07 12:21:50.409860] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:27.259 [2024-10-07 12:21:50.409994] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:27.259 [2024-10-07 12:21:50.412914] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.259 [2024-10-07 12:21:50.413100] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:27.259 [2024-10-07 12:21:50.413166] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:27.259 [2024-10-07 12:21:50.416071] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.259 [2024-10-07 12:21:50.416362] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:27.259 [2024-10-07 12:21:50.416455] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:27.259 [2024-10-07 12:21:50.419830] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.259 [2024-10-07 12:21:50.420084] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:27.259 [2024-10-07 12:21:50.420187] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:27.259 [2024-10-07 12:21:50.420304] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:27.259 [2024-10-07 12:21:50.420381] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:28.194 12:21:51 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:28.194 done. 00:11:28.194 12:21:51 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:11:28.194 12:21:51 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:28.194 12:21:51 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:11:28.194 12:21:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.194 12:21:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 ************************************ 00:11:28.194 START TEST nvme_reset 00:11:28.194 ************************************ 00:11:28.194 12:21:51 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:28.194 Initializing NVMe Controllers 00:11:28.194 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:28.194 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:28.194 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:28.194 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:28.194 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:28.194 00:11:28.194 real 0m0.314s 00:11:28.194 user 0m0.096s 00:11:28.194 sys 0m0.163s 00:11:28.194 12:21:51 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.194 12:21:51 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:28.194 ************************************ 00:11:28.194 END TEST nvme_reset 00:11:28.194 ************************************ 00:11:28.453 12:21:51 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:28.453 12:21:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:28.453 12:21:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.453 12:21:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.453 ************************************ 00:11:28.453 START TEST nvme_identify 00:11:28.453 ************************************ 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:11:28.453 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:28.453 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:28.453 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:28.453 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:28.453 12:21:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:28.453 12:21:51 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:28.453 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:28.717 [2024-10-07 12:21:51.850280] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 64521 terminated unexpected 00:11:28.717 ===================================================== 00:11:28.717 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:28.717 ===================================================== 00:11:28.717 Controller Capabilities/Features 00:11:28.717 ================================ 00:11:28.717 Vendor ID: 1b36 00:11:28.717 Subsystem Vendor ID: 1af4 00:11:28.717 Serial Number: 12340 00:11:28.717 Model Number: QEMU NVMe Ctrl 00:11:28.717 Firmware Version: 8.0.0 00:11:28.717 Recommended Arb Burst: 6 00:11:28.717 IEEE OUI Identifier: 00 54 52 00:11:28.717 Multi-path I/O 00:11:28.717 May have multiple subsystem ports: No 00:11:28.717 May have multiple controllers: No 00:11:28.717 Associated with SR-IOV VF: No 00:11:28.717 Max Data Transfer Size: 524288 00:11:28.717 Max Number of Namespaces: 256 00:11:28.717 Max Number of I/O Queues: 64 00:11:28.717 NVMe Specification Version (VS): 1.4 00:11:28.717 NVMe Specification Version (Identify): 1.4 00:11:28.717 Maximum Queue Entries: 2048 00:11:28.717 Contiguous Queues Required: Yes 00:11:28.717 Arbitration Mechanisms Supported 00:11:28.717 Weighted Round Robin: Not Supported 00:11:28.717 Vendor Specific: Not Supported 00:11:28.717 Reset Timeout: 7500 ms 00:11:28.717 Doorbell Stride: 4 bytes 00:11:28.717 NVM Subsystem Reset: Not Supported 00:11:28.717 Command Sets Supported 00:11:28.717 NVM Command Set: Supported 00:11:28.717 Boot Partition: Not Supported 00:11:28.717 Memory Page Size Minimum: 4096 bytes 00:11:28.717 Memory Page Size Maximum: 65536 bytes 00:11:28.717 Persistent Memory Region: Not Supported 00:11:28.717 Optional Asynchronous Events Supported 00:11:28.717 Namespace Attribute Notices: Supported 00:11:28.717 Firmware Activation Notices: Not Supported 00:11:28.717 ANA Change Notices: Not Supported 00:11:28.717 PLE Aggregate Log Change Notices: Not Supported 00:11:28.717 LBA Status Info Alert Notices: Not Supported 00:11:28.717 EGE Aggregate Log Change Notices: Not Supported 00:11:28.717 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.717 Zone Descriptor Change Notices: Not Supported 00:11:28.717 Discovery Log Change Notices: Not Supported 00:11:28.718 Controller Attributes 00:11:28.718 128-bit Host Identifier: Not Supported 00:11:28.718 Non-Operational Permissive Mode: Not Supported 00:11:28.718 NVM Sets: Not Supported 00:11:28.718 Read Recovery Levels: Not Supported 00:11:28.718 Endurance Groups: Not Supported 00:11:28.718 Predictable Latency Mode: Not Supported 00:11:28.718 Traffic Based Keep ALive: Not Supported 00:11:28.718 Namespace Granularity: Not Supported 00:11:28.718 SQ Associations: Not Supported 00:11:28.718 UUID List: Not Supported 00:11:28.718 Multi-Domain Subsystem: Not Supported 00:11:28.718 Fixed Capacity Management: Not Supported 00:11:28.718 Variable Capacity Management: Not Supported 00:11:28.718 Delete Endurance Group: Not Supported 00:11:28.718 Delete NVM Set: Not Supported 00:11:28.718 Extended LBA Formats Supported: Supported 00:11:28.718 Flexible Data Placement Supported: Not Supported 00:11:28.718 00:11:28.718 Controller Memory Buffer Support 00:11:28.718 ================================ 00:11:28.718 Supported: No 00:11:28.718 
00:11:28.718 Persistent Memory Region Support 00:11:28.718 ================================ 00:11:28.718 Supported: No 00:11:28.718 00:11:28.718 Admin Command Set Attributes 00:11:28.718 ============================ 00:11:28.718 Security Send/Receive: Not Supported 00:11:28.718 Format NVM: Supported 00:11:28.718 Firmware Activate/Download: Not Supported 00:11:28.718 Namespace Management: Supported 00:11:28.718 Device Self-Test: Not Supported 00:11:28.718 Directives: Supported 00:11:28.718 NVMe-MI: Not Supported 00:11:28.718 Virtualization Management: Not Supported 00:11:28.718 Doorbell Buffer Config: Supported 00:11:28.718 Get LBA Status Capability: Not Supported 00:11:28.718 Command & Feature Lockdown Capability: Not Supported 00:11:28.718 Abort Command Limit: 4 00:11:28.718 Async Event Request Limit: 4 00:11:28.718 Number of Firmware Slots: N/A 00:11:28.718 Firmware Slot 1 Read-Only: N/A 00:11:28.718 Firmware Activation Without Reset: N/A 00:11:28.718 Multiple Update Detection Support: N/A 00:11:28.718 Firmware Update Granularity: No Information Provided 00:11:28.718 Per-Namespace SMART Log: Yes 00:11:28.718 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.718 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:28.718 Command Effects Log Page: Supported 00:11:28.718 Get Log Page Extended Data: Supported 00:11:28.718 Telemetry Log Pages: Not Supported 00:11:28.718 Persistent Event Log Pages: Not Supported 00:11:28.718 Supported Log Pages Log Page: May Support 00:11:28.718 Commands Supported & Effects Log Page: Not Supported 00:11:28.718 Feature Identifiers & Effects Log Page:May Support 00:11:28.718 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.718 Data Area 4 for Telemetry Log: Not Supported 00:11:28.718 Error Log Page Entries Supported: 1 00:11:28.718 Keep Alive: Not Supported 00:11:28.718 00:11:28.718 NVM Command Set Attributes 00:11:28.718 ========================== 00:11:28.718 Submission Queue Entry Size 00:11:28.718 Max: 64 00:11:28.718 Min: 64 00:11:28.718 Completion Queue Entry Size 00:11:28.718 Max: 16 00:11:28.718 Min: 16 00:11:28.718 Number of Namespaces: 256 00:11:28.718 Compare Command: Supported 00:11:28.718 Write Uncorrectable Command: Not Supported 00:11:28.718 Dataset Management Command: Supported 00:11:28.718 Write Zeroes Command: Supported 00:11:28.718 Set Features Save Field: Supported 00:11:28.718 Reservations: Not Supported 00:11:28.718 Timestamp: Supported 00:11:28.718 Copy: Supported 00:11:28.718 Volatile Write Cache: Present 00:11:28.718 Atomic Write Unit (Normal): 1 00:11:28.718 Atomic Write Unit (PFail): 1 00:11:28.718 Atomic Compare & Write Unit: 1 00:11:28.718 Fused Compare & Write: Not Supported 00:11:28.718 Scatter-Gather List 00:11:28.718 SGL Command Set: Supported 00:11:28.718 SGL Keyed: Not Supported 00:11:28.718 SGL Bit Bucket Descriptor: Not Supported 00:11:28.718 SGL Metadata Pointer: Not Supported 00:11:28.718 Oversized SGL: Not Supported 00:11:28.718 SGL Metadata Address: Not Supported 00:11:28.718 SGL Offset: Not Supported 00:11:28.718 Transport SGL Data Block: Not Supported 00:11:28.718 Replay Protected Memory Block: Not Supported 00:11:28.718 00:11:28.718 Firmware Slot Information 00:11:28.718 ========================= 00:11:28.718 Active slot: 1 00:11:28.718 Slot 1 Firmware Revision: 1.0 00:11:28.718 00:11:28.718 00:11:28.718 Commands Supported and Effects 00:11:28.718 ============================== 00:11:28.718 Admin Commands 00:11:28.718 -------------- 00:11:28.718 Delete I/O Submission Queue (00h): Supported 00:11:28.718 
Create I/O Submission Queue (01h): Supported 00:11:28.718 Get Log Page (02h): Supported 00:11:28.718 Delete I/O Completion Queue (04h): Supported 00:11:28.718 Create I/O Completion Queue (05h): Supported 00:11:28.718 Identify (06h): Supported 00:11:28.718 Abort (08h): Supported 00:11:28.718 Set Features (09h): Supported 00:11:28.718 Get Features (0Ah): Supported 00:11:28.718 Asynchronous Event Request (0Ch): Supported 00:11:28.718 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.718 Directive Send (19h): Supported 00:11:28.718 Directive Receive (1Ah): Supported 00:11:28.718 Virtualization Management (1Ch): Supported 00:11:28.718 Doorbell Buffer Config (7Ch): Supported 00:11:28.718 Format NVM (80h): Supported LBA-Change 00:11:28.718 I/O Commands 00:11:28.718 ------------ 00:11:28.718 Flush (00h): Supported LBA-Change 00:11:28.718 Write (01h): Supported LBA-Change 00:11:28.718 Read (02h): Supported 00:11:28.718 Compare (05h): Supported 00:11:28.718 Write Zeroes (08h): Supported LBA-Change 00:11:28.718 Dataset Management (09h): Supported LBA-Change 00:11:28.718 Unknown (0Ch): Supported 00:11:28.718 Unknown (12h): Supported 00:11:28.718 Copy (19h): Supported LBA-Change 00:11:28.718 Unknown (1Dh): Supported LBA-Change 00:11:28.718 00:11:28.718 Error Log 00:11:28.718 ========= 00:11:28.718 00:11:28.718 Arbitration 00:11:28.718 =========== 00:11:28.718 Arbitration Burst: no limit 00:11:28.718 00:11:28.718 Power Management 00:11:28.718 ================ 00:11:28.718 Number of Power States: 1 00:11:28.718 Current Power State: Power State #0 00:11:28.718 Power State #0: 00:11:28.718 Max Power: 25.00 W 00:11:28.718 Non-Operational State: Operational 00:11:28.718 Entry Latency: 16 microseconds 00:11:28.718 Exit Latency: 4 microseconds 00:11:28.718 Relative Read Throughput: 0 00:11:28.718 Relative Read Latency: 0 00:11:28.718 Relative Write Throughput: 0 00:11:28.718 Relative Write Latency: 0 00:11:28.718 Idle Power[2024-10-07 12:21:51.851710] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 64521 terminated unexpected 00:11:28.718 : Not Reported 00:11:28.718 Active Power: Not Reported 00:11:28.718 Non-Operational Permissive Mode: Not Supported 00:11:28.718 00:11:28.718 Health Information 00:11:28.718 ================== 00:11:28.718 Critical Warnings: 00:11:28.718 Available Spare Space: OK 00:11:28.718 Temperature: OK 00:11:28.718 Device Reliability: OK 00:11:28.718 Read Only: No 00:11:28.718 Volatile Memory Backup: OK 00:11:28.718 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.718 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.718 Available Spare: 0% 00:11:28.718 Available Spare Threshold: 0% 00:11:28.718 Life Percentage Used: 0% 00:11:28.718 Data Units Read: 751 00:11:28.718 Data Units Written: 679 00:11:28.718 Host Read Commands: 32589 00:11:28.718 Host Write Commands: 32375 00:11:28.718 Controller Busy Time: 0 minutes 00:11:28.718 Power Cycles: 0 00:11:28.718 Power On Hours: 0 hours 00:11:28.718 Unsafe Shutdowns: 0 00:11:28.718 Unrecoverable Media Errors: 0 00:11:28.718 Lifetime Error Log Entries: 0 00:11:28.718 Warning Temperature Time: 0 minutes 00:11:28.718 Critical Temperature Time: 0 minutes 00:11:28.718 00:11:28.718 Number of Queues 00:11:28.718 ================ 00:11:28.718 Number of I/O Submission Queues: 64 00:11:28.718 Number of I/O Completion Queues: 64 00:11:28.718 00:11:28.718 ZNS Specific Controller Data 00:11:28.718 ============================ 00:11:28.718 Zone Append Size Limit: 0 00:11:28.718 00:11:28.718 
00:11:28.718 Active Namespaces 00:11:28.718 ================= 00:11:28.718 Namespace ID:1 00:11:28.718 Error Recovery Timeout: Unlimited 00:11:28.718 Command Set Identifier: NVM (00h) 00:11:28.718 Deallocate: Supported 00:11:28.718 Deallocated/Unwritten Error: Supported 00:11:28.718 Deallocated Read Value: All 0x00 00:11:28.718 Deallocate in Write Zeroes: Not Supported 00:11:28.718 Deallocated Guard Field: 0xFFFF 00:11:28.718 Flush: Supported 00:11:28.718 Reservation: Not Supported 00:11:28.718 Metadata Transferred as: Separate Metadata Buffer 00:11:28.718 Namespace Sharing Capabilities: Private 00:11:28.718 Size (in LBAs): 1548666 (5GiB) 00:11:28.718 Capacity (in LBAs): 1548666 (5GiB) 00:11:28.718 Utilization (in LBAs): 1548666 (5GiB) 00:11:28.718 Thin Provisioning: Not Supported 00:11:28.718 Per-NS Atomic Units: No 00:11:28.718 Maximum Single Source Range Length: 128 00:11:28.718 Maximum Copy Length: 128 00:11:28.718 Maximum Source Range Count: 128 00:11:28.718 NGUID/EUI64 Never Reused: No 00:11:28.718 Namespace Write Protected: No 00:11:28.718 Number of LBA Formats: 8 00:11:28.718 Current LBA Format: LBA Format #07 00:11:28.718 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.718 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.718 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.718 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.718 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.718 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.718 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.718 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.718 00:11:28.718 NVM Specific Namespace Data 00:11:28.718 =========================== 00:11:28.718 Logical Block Storage Tag Mask: 0 00:11:28.718 Protection Information Capabilities: 00:11:28.718 16b Guard Protection Information Storage Tag Support: No 00:11:28.718 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.718 Storage Tag Check Read Support: No 00:11:28.718 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.718 ===================================================== 00:11:28.718 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:28.718 ===================================================== 00:11:28.718 Controller Capabilities/Features 00:11:28.718 ================================ 00:11:28.718 Vendor ID: 1b36 00:11:28.718 Subsystem Vendor ID: 1af4 00:11:28.718 Serial Number: 12341 00:11:28.718 Model Number: QEMU NVMe Ctrl 00:11:28.718 Firmware Version: 8.0.0 00:11:28.718 Recommended Arb Burst: 6 00:11:28.718 IEEE OUI Identifier: 00 54 52 00:11:28.718 Multi-path I/O 00:11:28.718 May have multiple subsystem ports: No 00:11:28.718 May have multiple controllers: No 00:11:28.718 
Associated with SR-IOV VF: No 00:11:28.718 Max Data Transfer Size: 524288 00:11:28.718 Max Number of Namespaces: 256 00:11:28.718 Max Number of I/O Queues: 64 00:11:28.718 NVMe Specification Version (VS): 1.4 00:11:28.718 NVMe Specification Version (Identify): 1.4 00:11:28.718 Maximum Queue Entries: 2048 00:11:28.718 Contiguous Queues Required: Yes 00:11:28.718 Arbitration Mechanisms Supported 00:11:28.718 Weighted Round Robin: Not Supported 00:11:28.718 Vendor Specific: Not Supported 00:11:28.718 Reset Timeout: 7500 ms 00:11:28.718 Doorbell Stride: 4 bytes 00:11:28.718 NVM Subsystem Reset: Not Supported 00:11:28.718 Command Sets Supported 00:11:28.718 NVM Command Set: Supported 00:11:28.719 Boot Partition: Not Supported 00:11:28.719 Memory Page Size Minimum: 4096 bytes 00:11:28.719 Memory Page Size Maximum: 65536 bytes 00:11:28.719 Persistent Memory Region: Not Supported 00:11:28.719 Optional Asynchronous Events Supported 00:11:28.719 Namespace Attribute Notices: Supported 00:11:28.719 Firmware Activation Notices: Not Supported 00:11:28.719 ANA Change Notices: Not Supported 00:11:28.719 PLE Aggregate Log Change Notices: Not Supported 00:11:28.719 LBA Status Info Alert Notices: Not Supported 00:11:28.719 EGE Aggregate Log Change Notices: Not Supported 00:11:28.719 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.719 Zone Descriptor Change Notices: Not Supported 00:11:28.719 Discovery Log Change Notices: Not Supported 00:11:28.719 Controller Attributes 00:11:28.719 128-bit Host Identifier: Not Supported 00:11:28.719 Non-Operational Permissive Mode: Not Supported 00:11:28.719 NVM Sets: Not Supported 00:11:28.719 Read Recovery Levels: Not Supported 00:11:28.719 Endurance Groups: Not Supported 00:11:28.719 Predictable Latency Mode: Not Supported 00:11:28.719 Traffic Based Keep ALive: Not Supported 00:11:28.719 Namespace Granularity: Not Supported 00:11:28.719 SQ Associations: Not Supported 00:11:28.719 UUID List: Not Supported 00:11:28.719 Multi-Domain Subsystem: Not Supported 00:11:28.719 Fixed Capacity Management: Not Supported 00:11:28.719 Variable Capacity Management: Not Supported 00:11:28.719 Delete Endurance Group: Not Supported 00:11:28.719 Delete NVM Set: Not Supported 00:11:28.719 Extended LBA Formats Supported: Supported 00:11:28.719 Flexible Data Placement Supported: Not Supported 00:11:28.719 00:11:28.719 Controller Memory Buffer Support 00:11:28.719 ================================ 00:11:28.719 Supported: No 00:11:28.719 00:11:28.719 Persistent Memory Region Support 00:11:28.719 ================================ 00:11:28.719 Supported: No 00:11:28.719 00:11:28.719 Admin Command Set Attributes 00:11:28.719 ============================ 00:11:28.719 Security Send/Receive: Not Supported 00:11:28.719 Format NVM: Supported 00:11:28.719 Firmware Activate/Download: Not Supported 00:11:28.719 Namespace Management: Supported 00:11:28.719 Device Self-Test: Not Supported 00:11:28.719 Directives: Supported 00:11:28.719 NVMe-MI: Not Supported 00:11:28.719 Virtualization Management: Not Supported 00:11:28.719 Doorbell Buffer Config: Supported 00:11:28.719 Get LBA Status Capability: Not Supported 00:11:28.719 Command & Feature Lockdown Capability: Not Supported 00:11:28.719 Abort Command Limit: 4 00:11:28.719 Async Event Request Limit: 4 00:11:28.719 Number of Firmware Slots: N/A 00:11:28.719 Firmware Slot 1 Read-Only: N/A 00:11:28.719 Firmware Activation Without Reset: N/A 00:11:28.719 Multiple Update Detection Support: N/A 00:11:28.719 Firmware Update Granularity: No Information 
Provided 00:11:28.719 Per-Namespace SMART Log: Yes 00:11:28.719 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.719 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:28.719 Command Effects Log Page: Supported 00:11:28.719 Get Log Page Extended Data: Supported 00:11:28.719 Telemetry Log Pages: Not Supported 00:11:28.719 Persistent Event Log Pages: Not Supported 00:11:28.719 Supported Log Pages Log Page: May Support 00:11:28.719 Commands Supported & Effects Log Page: Not Supported 00:11:28.719 Feature Identifiers & Effects Log Page:May Support 00:11:28.719 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.719 Data Area 4 for Telemetry Log: Not Supported 00:11:28.719 Error Log Page Entries Supported: 1 00:11:28.719 Keep Alive: Not Supported 00:11:28.719 00:11:28.719 NVM Command Set Attributes 00:11:28.719 ========================== 00:11:28.719 Submission Queue Entry Size 00:11:28.719 Max: 64 00:11:28.719 Min: 64 00:11:28.719 Completion Queue Entry Size 00:11:28.719 Max: 16 00:11:28.719 Min: 16 00:11:28.719 Number of Namespaces: 256 00:11:28.719 Compare Command: Supported 00:11:28.719 Write Uncorrectable Command: Not Supported 00:11:28.719 Dataset Management Command: Supported 00:11:28.719 Write Zeroes Command: Supported 00:11:28.719 Set Features Save Field: Supported 00:11:28.719 Reservations: Not Supported 00:11:28.719 Timestamp: Supported 00:11:28.719 Copy: Supported 00:11:28.719 Volatile Write Cache: Present 00:11:28.719 Atomic Write Unit (Normal): 1 00:11:28.719 Atomic Write Unit (PFail): 1 00:11:28.719 Atomic Compare & Write Unit: 1 00:11:28.719 Fused Compare & Write: Not Supported 00:11:28.719 Scatter-Gather List 00:11:28.719 SGL Command Set: Supported 00:11:28.719 SGL Keyed: Not Supported 00:11:28.719 SGL Bit Bucket Descriptor: Not Supported 00:11:28.719 SGL Metadata Pointer: Not Supported 00:11:28.719 Oversized SGL: Not Supported 00:11:28.719 SGL Metadata Address: Not Supported 00:11:28.719 SGL Offset: Not Supported 00:11:28.719 Transport SGL Data Block: Not Supported 00:11:28.719 Replay Protected Memory Block: Not Supported 00:11:28.719 00:11:28.719 Firmware Slot Information 00:11:28.719 ========================= 00:11:28.719 Active slot: 1 00:11:28.719 Slot 1 Firmware Revision: 1.0 00:11:28.719 00:11:28.719 00:11:28.719 Commands Supported and Effects 00:11:28.719 ============================== 00:11:28.719 Admin Commands 00:11:28.719 -------------- 00:11:28.719 Delete I/O Submission Queue (00h): Supported 00:11:28.719 Create I/O Submission Queue (01h): Supported 00:11:28.719 Get Log Page (02h): Supported 00:11:28.719 Delete I/O Completion Queue (04h): Supported 00:11:28.719 Create I/O Completion Queue (05h): Supported 00:11:28.719 Identify (06h): Supported 00:11:28.719 Abort (08h): Supported 00:11:28.719 Set Features (09h): Supported 00:11:28.719 Get Features (0Ah): Supported 00:11:28.719 Asynchronous Event Request (0Ch): Supported 00:11:28.719 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.719 Directive Send (19h): Supported 00:11:28.719 Directive Receive (1Ah): Supported 00:11:28.719 Virtualization Management (1Ch): Supported 00:11:28.719 Doorbell Buffer Config (7Ch): Supported 00:11:28.719 Format NVM (80h): Supported LBA-Change 00:11:28.719 I/O Commands 00:11:28.719 ------------ 00:11:28.719 Flush (00h): Supported LBA-Change 00:11:28.719 Write (01h): Supported LBA-Change 00:11:28.719 Read (02h): Supported 00:11:28.719 Compare (05h): Supported 00:11:28.719 Write Zeroes (08h): Supported LBA-Change 00:11:28.719 Dataset Management (09h): 
Supported LBA-Change 00:11:28.719 Unknown (0Ch): Supported 00:11:28.719 Unknown (12h): Supported 00:11:28.719 Copy (19h): Supported LBA-Change 00:11:28.719 Unknown (1Dh): Supported LBA-Change 00:11:28.719 00:11:28.719 Error Log 00:11:28.719 ========= 00:11:28.719 00:11:28.719 Arbitration 00:11:28.719 =========== 00:11:28.719 Arbitration Burst: no limit 00:11:28.719 00:11:28.719 Power Management 00:11:28.719 ================ 00:11:28.719 Number of Power States: 1 00:11:28.719 Current Power State: Power State #0 00:11:28.719 Power State #0: 00:11:28.719 Max Power: 25.00 W 00:11:28.719 Non-Operational State: Operational 00:11:28.719 Entry Latency: 16 microseconds 00:11:28.719 Exit Latency: 4 microseconds 00:11:28.719 Relative Read Throughput: 0 00:11:28.719 Relative Read Latency: 0 00:11:28.719 Relative Write Throughput: 0 00:11:28.719 Relative Write Latency: 0 00:11:28.719 Idle Power: Not Reported 00:11:28.719 Active Power: Not Reported 00:11:28.719 Non-Operational Permissive Mode: Not Supported 00:11:28.719 00:11:28.719 Health Information 00:11:28.719 ================== 00:11:28.719 Critical Warnings: 00:11:28.719 Available Spare Space: OK 00:11:28.719 Temperature: [2024-10-07 12:21:51.852650] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 64521 terminated unexpected 00:11:28.719 OK 00:11:28.719 Device Reliability: OK 00:11:28.719 Read Only: No 00:11:28.719 Volatile Memory Backup: OK 00:11:28.719 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.719 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.719 Available Spare: 0% 00:11:28.719 Available Spare Threshold: 0% 00:11:28.719 Life Percentage Used: 0% 00:11:28.719 Data Units Read: 1157 00:11:28.719 Data Units Written: 1030 00:11:28.719 Host Read Commands: 48290 00:11:28.719 Host Write Commands: 47178 00:11:28.719 Controller Busy Time: 0 minutes 00:11:28.719 Power Cycles: 0 00:11:28.719 Power On Hours: 0 hours 00:11:28.719 Unsafe Shutdowns: 0 00:11:28.719 Unrecoverable Media Errors: 0 00:11:28.719 Lifetime Error Log Entries: 0 00:11:28.719 Warning Temperature Time: 0 minutes 00:11:28.719 Critical Temperature Time: 0 minutes 00:11:28.719 00:11:28.719 Number of Queues 00:11:28.719 ================ 00:11:28.719 Number of I/O Submission Queues: 64 00:11:28.719 Number of I/O Completion Queues: 64 00:11:28.719 00:11:28.719 ZNS Specific Controller Data 00:11:28.719 ============================ 00:11:28.719 Zone Append Size Limit: 0 00:11:28.719 00:11:28.719 00:11:28.719 Active Namespaces 00:11:28.719 ================= 00:11:28.719 Namespace ID:1 00:11:28.719 Error Recovery Timeout: Unlimited 00:11:28.719 Command Set Identifier: NVM (00h) 00:11:28.719 Deallocate: Supported 00:11:28.719 Deallocated/Unwritten Error: Supported 00:11:28.719 Deallocated Read Value: All 0x00 00:11:28.719 Deallocate in Write Zeroes: Not Supported 00:11:28.719 Deallocated Guard Field: 0xFFFF 00:11:28.719 Flush: Supported 00:11:28.719 Reservation: Not Supported 00:11:28.719 Namespace Sharing Capabilities: Private 00:11:28.719 Size (in LBAs): 1310720 (5GiB) 00:11:28.719 Capacity (in LBAs): 1310720 (5GiB) 00:11:28.719 Utilization (in LBAs): 1310720 (5GiB) 00:11:28.719 Thin Provisioning: Not Supported 00:11:28.719 Per-NS Atomic Units: No 00:11:28.719 Maximum Single Source Range Length: 128 00:11:28.719 Maximum Copy Length: 128 00:11:28.719 Maximum Source Range Count: 128 00:11:28.719 NGUID/EUI64 Never Reused: No 00:11:28.719 Namespace Write Protected: No 00:11:28.719 Number of LBA Formats: 8 00:11:28.719 Current LBA Format: LBA 
Format #04 00:11:28.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.719 00:11:28.719 NVM Specific Namespace Data 00:11:28.719 =========================== 00:11:28.719 Logical Block Storage Tag Mask: 0 00:11:28.719 Protection Information Capabilities: 00:11:28.719 16b Guard Protection Information Storage Tag Support: No 00:11:28.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.719 Storage Tag Check Read Support: No 00:11:28.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.719 ===================================================== 00:11:28.719 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:28.719 ===================================================== 00:11:28.719 Controller Capabilities/Features 00:11:28.719 ================================ 00:11:28.719 Vendor ID: 1b36 00:11:28.719 Subsystem Vendor ID: 1af4 00:11:28.719 Serial Number: 12343 00:11:28.719 Model Number: QEMU NVMe Ctrl 00:11:28.719 Firmware Version: 8.0.0 00:11:28.719 Recommended Arb Burst: 6 00:11:28.719 IEEE OUI Identifier: 00 54 52 00:11:28.719 Multi-path I/O 00:11:28.719 May have multiple subsystem ports: No 00:11:28.719 May have multiple controllers: Yes 00:11:28.719 Associated with SR-IOV VF: No 00:11:28.719 Max Data Transfer Size: 524288 00:11:28.719 Max Number of Namespaces: 256 00:11:28.719 Max Number of I/O Queues: 64 00:11:28.719 NVMe Specification Version (VS): 1.4 00:11:28.719 NVMe Specification Version (Identify): 1.4 00:11:28.719 Maximum Queue Entries: 2048 00:11:28.719 Contiguous Queues Required: Yes 00:11:28.720 Arbitration Mechanisms Supported 00:11:28.720 Weighted Round Robin: Not Supported 00:11:28.720 Vendor Specific: Not Supported 00:11:28.720 Reset Timeout: 7500 ms 00:11:28.720 Doorbell Stride: 4 bytes 00:11:28.720 NVM Subsystem Reset: Not Supported 00:11:28.720 Command Sets Supported 00:11:28.720 NVM Command Set: Supported 00:11:28.720 Boot Partition: Not Supported 00:11:28.720 Memory Page Size Minimum: 4096 bytes 00:11:28.720 Memory Page Size Maximum: 65536 bytes 00:11:28.720 Persistent Memory Region: Not Supported 00:11:28.720 Optional Asynchronous Events Supported 00:11:28.720 Namespace Attribute Notices: Supported 00:11:28.720 Firmware Activation Notices: Not Supported 00:11:28.720 ANA Change Notices: Not Supported 00:11:28.720 PLE Aggregate Log Change 
Notices: Not Supported 00:11:28.720 LBA Status Info Alert Notices: Not Supported 00:11:28.720 EGE Aggregate Log Change Notices: Not Supported 00:11:28.720 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.720 Zone Descriptor Change Notices: Not Supported 00:11:28.720 Discovery Log Change Notices: Not Supported 00:11:28.720 Controller Attributes 00:11:28.720 128-bit Host Identifier: Not Supported 00:11:28.720 Non-Operational Permissive Mode: Not Supported 00:11:28.720 NVM Sets: Not Supported 00:11:28.720 Read Recovery Levels: Not Supported 00:11:28.720 Endurance Groups: Supported 00:11:28.720 Predictable Latency Mode: Not Supported 00:11:28.720 Traffic Based Keep ALive: Not Supported 00:11:28.720 Namespace Granularity: Not Supported 00:11:28.720 SQ Associations: Not Supported 00:11:28.720 UUID List: Not Supported 00:11:28.720 Multi-Domain Subsystem: Not Supported 00:11:28.720 Fixed Capacity Management: Not Supported 00:11:28.720 Variable Capacity Management: Not Supported 00:11:28.720 Delete Endurance Group: Not Supported 00:11:28.720 Delete NVM Set: Not Supported 00:11:28.720 Extended LBA Formats Supported: Supported 00:11:28.720 Flexible Data Placement Supported: Supported 00:11:28.720 00:11:28.720 Controller Memory Buffer Support 00:11:28.720 ================================ 00:11:28.720 Supported: No 00:11:28.720 00:11:28.720 Persistent Memory Region Support 00:11:28.720 ================================ 00:11:28.720 Supported: No 00:11:28.720 00:11:28.720 Admin Command Set Attributes 00:11:28.720 ============================ 00:11:28.720 Security Send/Receive: Not Supported 00:11:28.720 Format NVM: Supported 00:11:28.720 Firmware Activate/Download: Not Supported 00:11:28.720 Namespace Management: Supported 00:11:28.720 Device Self-Test: Not Supported 00:11:28.720 Directives: Supported 00:11:28.720 NVMe-MI: Not Supported 00:11:28.720 Virtualization Management: Not Supported 00:11:28.720 Doorbell Buffer Config: Supported 00:11:28.720 Get LBA Status Capability: Not Supported 00:11:28.720 Command & Feature Lockdown Capability: Not Supported 00:11:28.720 Abort Command Limit: 4 00:11:28.720 Async Event Request Limit: 4 00:11:28.720 Number of Firmware Slots: N/A 00:11:28.720 Firmware Slot 1 Read-Only: N/A 00:11:28.720 Firmware Activation Without Reset: N/A 00:11:28.720 Multiple Update Detection Support: N/A 00:11:28.720 Firmware Update Granularity: No Information Provided 00:11:28.720 Per-Namespace SMART Log: Yes 00:11:28.720 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.720 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:28.720 Command Effects Log Page: Supported 00:11:28.720 Get Log Page Extended Data: Supported 00:11:28.720 Telemetry Log Pages: Not Supported 00:11:28.720 Persistent Event Log Pages: Not Supported 00:11:28.720 Supported Log Pages Log Page: May Support 00:11:28.720 Commands Supported & Effects Log Page: Not Supported 00:11:28.720 Feature Identifiers & Effects Log Page:May Support 00:11:28.720 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.720 Data Area 4 for Telemetry Log: Not Supported 00:11:28.720 Error Log Page Entries Supported: 1 00:11:28.720 Keep Alive: Not Supported 00:11:28.720 00:11:28.720 NVM Command Set Attributes 00:11:28.720 ========================== 00:11:28.720 Submission Queue Entry Size 00:11:28.720 Max: 64 00:11:28.720 Min: 64 00:11:28.720 Completion Queue Entry Size 00:11:28.720 Max: 16 00:11:28.720 Min: 16 00:11:28.720 Number of Namespaces: 256 00:11:28.720 Compare Command: Supported 00:11:28.720 Write 
Uncorrectable Command: Not Supported 00:11:28.720 Dataset Management Command: Supported 00:11:28.720 Write Zeroes Command: Supported 00:11:28.720 Set Features Save Field: Supported 00:11:28.720 Reservations: Not Supported 00:11:28.720 Timestamp: Supported 00:11:28.720 Copy: Supported 00:11:28.720 Volatile Write Cache: Present 00:11:28.720 Atomic Write Unit (Normal): 1 00:11:28.720 Atomic Write Unit (PFail): 1 00:11:28.720 Atomic Compare & Write Unit: 1 00:11:28.720 Fused Compare & Write: Not Supported 00:11:28.720 Scatter-Gather List 00:11:28.720 SGL Command Set: Supported 00:11:28.720 SGL Keyed: Not Supported 00:11:28.720 SGL Bit Bucket Descriptor: Not Supported 00:11:28.720 SGL Metadata Pointer: Not Supported 00:11:28.720 Oversized SGL: Not Supported 00:11:28.720 SGL Metadata Address: Not Supported 00:11:28.720 SGL Offset: Not Supported 00:11:28.720 Transport SGL Data Block: Not Supported 00:11:28.720 Replay Protected Memory Block: Not Supported 00:11:28.720 00:11:28.720 Firmware Slot Information 00:11:28.720 ========================= 00:11:28.720 Active slot: 1 00:11:28.720 Slot 1 Firmware Revision: 1.0 00:11:28.720 00:11:28.720 00:11:28.720 Commands Supported and Effects 00:11:28.720 ============================== 00:11:28.720 Admin Commands 00:11:28.720 -------------- 00:11:28.720 Delete I/O Submission Queue (00h): Supported 00:11:28.720 Create I/O Submission Queue (01h): Supported 00:11:28.720 Get Log Page (02h): Supported 00:11:28.720 Delete I/O Completion Queue (04h): Supported 00:11:28.720 Create I/O Completion Queue (05h): Supported 00:11:28.720 Identify (06h): Supported 00:11:28.720 Abort (08h): Supported 00:11:28.720 Set Features (09h): Supported 00:11:28.720 Get Features (0Ah): Supported 00:11:28.720 Asynchronous Event Request (0Ch): Supported 00:11:28.720 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.720 Directive Send (19h): Supported 00:11:28.720 Directive Receive (1Ah): Supported 00:11:28.720 Virtualization Management (1Ch): Supported 00:11:28.720 Doorbell Buffer Config (7Ch): Supported 00:11:28.720 Format NVM (80h): Supported LBA-Change 00:11:28.720 I/O Commands 00:11:28.720 ------------ 00:11:28.720 Flush (00h): Supported LBA-Change 00:11:28.720 Write (01h): Supported LBA-Change 00:11:28.720 Read (02h): Supported 00:11:28.720 Compare (05h): Supported 00:11:28.720 Write Zeroes (08h): Supported LBA-Change 00:11:28.720 Dataset Management (09h): Supported LBA-Change 00:11:28.720 Unknown (0Ch): Supported 00:11:28.720 Unknown (12h): Supported 00:11:28.720 Copy (19h): Supported LBA-Change 00:11:28.720 Unknown (1Dh): Supported LBA-Change 00:11:28.720 00:11:28.720 Error Log 00:11:28.720 ========= 00:11:28.720 00:11:28.720 Arbitration 00:11:28.720 =========== 00:11:28.720 Arbitration Burst: no limit 00:11:28.720 00:11:28.720 Power Management 00:11:28.720 ================ 00:11:28.720 Number of Power States: 1 00:11:28.720 Current Power State: Power State #0 00:11:28.720 Power State #0: 00:11:28.720 Max Power: 25.00 W 00:11:28.720 Non-Operational State: Operational 00:11:28.720 Entry Latency: 16 microseconds 00:11:28.720 Exit Latency: 4 microseconds 00:11:28.720 Relative Read Throughput: 0 00:11:28.720 Relative Read Latency: 0 00:11:28.720 Relative Write Throughput: 0 00:11:28.720 Relative Write Latency: 0 00:11:28.720 Idle Power: Not Reported 00:11:28.720 Active Power: Not Reported 00:11:28.720 Non-Operational Permissive Mode: Not Supported 00:11:28.720 00:11:28.720 Health Information 00:11:28.720 ================== 00:11:28.720 Critical Warnings: 00:11:28.720 
Available Spare Space: OK 00:11:28.720 Temperature: OK 00:11:28.720 Device Reliability: OK 00:11:28.720 Read Only: No 00:11:28.720 Volatile Memory Backup: OK 00:11:28.720 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.720 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.720 Available Spare: 0% 00:11:28.720 Available Spare Threshold: 0% 00:11:28.720 Life Percentage Used: 0% 00:11:28.720 Data Units Read: 1070 00:11:28.720 Data Units Written: 999 00:11:28.720 Host Read Commands: 35319 00:11:28.720 Host Write Commands: 34745 00:11:28.720 Controller Busy Time: 0 minutes 00:11:28.720 Power Cycles: 0 00:11:28.720 Power On Hours: 0 hours 00:11:28.720 Unsafe Shutdowns: 0 00:11:28.720 Unrecoverable Media Errors: 0 00:11:28.720 Lifetime Error Log Entries: 0 00:11:28.720 Warning Temperature Time: 0 minutes 00:11:28.720 Critical Temperature Time: 0 minutes 00:11:28.720 00:11:28.720 Number of Queues 00:11:28.720 ================ 00:11:28.720 Number of I/O Submission Queues: 64 00:11:28.720 Number of I/O Completion Queues: 64 00:11:28.720 00:11:28.720 ZNS Specific Controller Data 00:11:28.720 ============================ 00:11:28.720 Zone Append Size Limit: 0 00:11:28.720 00:11:28.720 00:11:28.720 Active Namespaces 00:11:28.720 ================= 00:11:28.720 Namespace ID:1 00:11:28.720 Error Recovery Timeout: Unlimited 00:11:28.720 Command Set Identifier: NVM (00h) 00:11:28.720 Deallocate: Supported 00:11:28.720 Deallocated/Unwritten Error: Supported 00:11:28.720 Deallocated Read Value: All 0x00 00:11:28.720 Deallocate in Write Zeroes: Not Supported 00:11:28.720 Deallocated Guard Field: 0xFFFF 00:11:28.720 Flush: Supported 00:11:28.720 Reservation: Not Supported 00:11:28.720 Namespace Sharing Capabilities: Multiple Controllers 00:11:28.720 Size (in LBAs): 262144 (1GiB) 00:11:28.720 Capacity (in LBAs): 262144 (1GiB) 00:11:28.720 Utilization (in LBAs): 262144 (1GiB) 00:11:28.720 Thin Provisioning: Not Supported 00:11:28.720 Per-NS Atomic Units: No 00:11:28.720 Maximum Single Source Range Length: 128 00:11:28.720 Maximum Copy Length: 128 00:11:28.720 Maximum Source Range Count: 128 00:11:28.720 NGUID/EUI64 Never Reused: No 00:11:28.720 Namespace Write Protected: No 00:11:28.720 Endurance group ID: 1 00:11:28.720 Number of LBA Formats: 8 00:11:28.720 Current LBA Format: LBA Format #04 00:11:28.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.720 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.720 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.720 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.720 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.720 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.720 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.720 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.720 00:11:28.720 Get Feature FDP: 00:11:28.720 ================ 00:11:28.720 Enabled: Yes 00:11:28.720 FDP configuration index: 0 00:11:28.720 00:11:28.720 FDP configurations log page 00:11:28.720 =========================== 00:11:28.720 Number of FDP configurations: 1 00:11:28.720 Version: 0 00:11:28.720 Size: 112 00:11:28.720 FDP Configuration Descriptor: 0 00:11:28.720 Descriptor Size: 96 00:11:28.720 Reclaim Group Identifier format: 2 00:11:28.720 FDP Volatile Write Cache: Not Present 00:11:28.720 FDP Configuration: Valid 00:11:28.720 Vendor Specific Size: 0 00:11:28.720 Number of Reclaim Groups: 2 00:11:28.720 Number of Reclaim Unit Handles: 8 00:11:28.720 Max Placement Identifiers: 128 00:11:28.720
Number of Namespaces Supported: 256 00:11:28.720 Reclaim Unit Nominal Size: 6000000 bytes 00:11:28.720 Estimated Reclaim Unit Time Limit: Not Reported 00:11:28.720 RUH Desc #000: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #001: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #002: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #003: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #004: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #005: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #006: RUH Type: Initially Isolated 00:11:28.720 RUH Desc #007: RUH Type: Initially Isolated 00:11:28.720 00:11:28.720 FDP reclaim unit handle usage log page 00:11:28.720 ====================================== 00:11:28.720 Number of Reclaim Unit Handles: 8 00:11:28.720 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:28.721 RUH Usage Desc #001: RUH Attributes: Unused 00:11:28.721 RUH Usage Desc #002: RUH Attributes: Unused 00:11:28.721 RUH Usage Desc #003: RUH Attributes: Unused 00:11:28.721 RUH Usage Desc #004: RUH Attributes: Unused 00:11:28.721 RUH Usage Desc #005: RUH Attributes: Unused 00:11:28.721 RUH Usage Desc #006: RUH Attributes: Unused 00:11:28.721 RUH Usage Desc #007: RUH Attributes: Unused 00:11:28.721 00:11:28.721 FDP statistics log page 00:11:28.721 ======================= 00:11:28.721 Host bytes with metadata written: 626237440 00:11:28.721 [2024-10-07 12:21:51.854145] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 64521 terminated unexpected 00:11:28.721 Media bytes with metadata written: 626319360 00:11:28.721 Media bytes erased: 0 00:11:28.721 00:11:28.721 FDP events log page 00:11:28.721 =================== 00:11:28.721 Number of FDP events: 0 00:11:28.721 00:11:28.721 NVM Specific Namespace Data 00:11:28.721 =========================== 00:11:28.721 Logical Block Storage Tag Mask: 0 00:11:28.721 Protection Information Capabilities: 00:11:28.721 16b Guard Protection Information Storage Tag Support: No 00:11:28.721 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.721 Storage Tag Check Read Support: No 00:11:28.721 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.721 ===================================================== 00:11:28.721 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:28.721 ===================================================== 00:11:28.721 Controller Capabilities/Features 00:11:28.721 ================================ 00:11:28.721 Vendor ID: 1b36 00:11:28.721 Subsystem Vendor ID: 1af4 00:11:28.721 Serial Number: 12342 00:11:28.721 Model Number: QEMU NVMe Ctrl 00:11:28.721 Firmware Version: 8.0.0 00:11:28.721 Recommended Arb Burst: 6 00:11:28.721 IEEE OUI Identifier: 00 54 52 00:11:28.721 Multi-path I/O 00:11:28.721
May have multiple subsystem ports: No 00:11:28.721 May have multiple controllers: No 00:11:28.721 Associated with SR-IOV VF: No 00:11:28.721 Max Data Transfer Size: 524288 00:11:28.721 Max Number of Namespaces: 256 00:11:28.721 Max Number of I/O Queues: 64 00:11:28.721 NVMe Specification Version (VS): 1.4 00:11:28.721 NVMe Specification Version (Identify): 1.4 00:11:28.721 Maximum Queue Entries: 2048 00:11:28.721 Contiguous Queues Required: Yes 00:11:28.721 Arbitration Mechanisms Supported 00:11:28.721 Weighted Round Robin: Not Supported 00:11:28.721 Vendor Specific: Not Supported 00:11:28.721 Reset Timeout: 7500 ms 00:11:28.721 Doorbell Stride: 4 bytes 00:11:28.721 NVM Subsystem Reset: Not Supported 00:11:28.721 Command Sets Supported 00:11:28.721 NVM Command Set: Supported 00:11:28.721 Boot Partition: Not Supported 00:11:28.721 Memory Page Size Minimum: 4096 bytes 00:11:28.721 Memory Page Size Maximum: 65536 bytes 00:11:28.721 Persistent Memory Region: Not Supported 00:11:28.721 Optional Asynchronous Events Supported 00:11:28.721 Namespace Attribute Notices: Supported 00:11:28.721 Firmware Activation Notices: Not Supported 00:11:28.721 ANA Change Notices: Not Supported 00:11:28.721 PLE Aggregate Log Change Notices: Not Supported 00:11:28.721 LBA Status Info Alert Notices: Not Supported 00:11:28.721 EGE Aggregate Log Change Notices: Not Supported 00:11:28.721 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.721 Zone Descriptor Change Notices: Not Supported 00:11:28.721 Discovery Log Change Notices: Not Supported 00:11:28.721 Controller Attributes 00:11:28.721 128-bit Host Identifier: Not Supported 00:11:28.721 Non-Operational Permissive Mode: Not Supported 00:11:28.721 NVM Sets: Not Supported 00:11:28.721 Read Recovery Levels: Not Supported 00:11:28.721 Endurance Groups: Not Supported 00:11:28.721 Predictable Latency Mode: Not Supported 00:11:28.721 Traffic Based Keep Alive: Not Supported 00:11:28.721 Namespace Granularity: Not Supported 00:11:28.721 SQ Associations: Not Supported 00:11:28.721 UUID List: Not Supported 00:11:28.721 Multi-Domain Subsystem: Not Supported 00:11:28.721 Fixed Capacity Management: Not Supported 00:11:28.721 Variable Capacity Management: Not Supported 00:11:28.721 Delete Endurance Group: Not Supported 00:11:28.721 Delete NVM Set: Not Supported 00:11:28.721 Extended LBA Formats Supported: Supported 00:11:28.721 Flexible Data Placement Supported: Not Supported 00:11:28.721 00:11:28.721 Controller Memory Buffer Support 00:11:28.721 ================================ 00:11:28.721 Supported: No 00:11:28.721 00:11:28.721 Persistent Memory Region Support 00:11:28.721 ================================ 00:11:28.721 Supported: No 00:11:28.721 00:11:28.721 Admin Command Set Attributes 00:11:28.721 ============================ 00:11:28.721 Security Send/Receive: Not Supported 00:11:28.721 Format NVM: Supported 00:11:28.721 Firmware Activate/Download: Not Supported 00:11:28.721 Namespace Management: Supported 00:11:28.721 Device Self-Test: Not Supported 00:11:28.721 Directives: Supported 00:11:28.721 NVMe-MI: Not Supported 00:11:28.721 Virtualization Management: Not Supported 00:11:28.721 Doorbell Buffer Config: Supported 00:11:28.721 Get LBA Status Capability: Not Supported 00:11:28.721 Command & Feature Lockdown Capability: Not Supported 00:11:28.721 Abort Command Limit: 4 00:11:28.721 Async Event Request Limit: 4 00:11:28.721 Number of Firmware Slots: N/A 00:11:28.721 Firmware Slot 1 Read-Only: N/A 00:11:28.721 Firmware Activation Without Reset: N/A 00:11:28.721
Multiple Update Detection Support: N/A 00:11:28.721 Firmware Update Granularity: No Information Provided 00:11:28.721 Per-Namespace SMART Log: Yes 00:11:28.721 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.721 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:28.721 Command Effects Log Page: Supported 00:11:28.721 Get Log Page Extended Data: Supported 00:11:28.721 Telemetry Log Pages: Not Supported 00:11:28.721 Persistent Event Log Pages: Not Supported 00:11:28.721 Supported Log Pages Log Page: May Support 00:11:28.721 Commands Supported & Effects Log Page: Not Supported 00:11:28.721 Feature Identifiers & Effects Log Page: May Support 00:11:28.721 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.721 Data Area 4 for Telemetry Log: Not Supported 00:11:28.721 Error Log Page Entries Supported: 1 00:11:28.721 Keep Alive: Not Supported 00:11:28.721 00:11:28.721 NVM Command Set Attributes 00:11:28.721 ========================== 00:11:28.721 Submission Queue Entry Size 00:11:28.721 Max: 64 00:11:28.721 Min: 64 00:11:28.721 Completion Queue Entry Size 00:11:28.721 Max: 16 00:11:28.721 Min: 16 00:11:28.721 Number of Namespaces: 256 00:11:28.721 Compare Command: Supported 00:11:28.721 Write Uncorrectable Command: Not Supported 00:11:28.721 Dataset Management Command: Supported 00:11:28.721 Write Zeroes Command: Supported 00:11:28.721 Set Features Save Field: Supported 00:11:28.721 Reservations: Not Supported 00:11:28.721 Timestamp: Supported 00:11:28.721 Copy: Supported 00:11:28.721 Volatile Write Cache: Present 00:11:28.721 Atomic Write Unit (Normal): 1 00:11:28.721 Atomic Write Unit (PFail): 1 00:11:28.721 Atomic Compare & Write Unit: 1 00:11:28.721 Fused Compare & Write: Not Supported 00:11:28.721 Scatter-Gather List 00:11:28.721 SGL Command Set: Supported 00:11:28.721 SGL Keyed: Not Supported 00:11:28.721 SGL Bit Bucket Descriptor: Not Supported 00:11:28.721 SGL Metadata Pointer: Not Supported 00:11:28.721 Oversized SGL: Not Supported 00:11:28.721 SGL Metadata Address: Not Supported 00:11:28.721 SGL Offset: Not Supported 00:11:28.721 Transport SGL Data Block: Not Supported 00:11:28.721 Replay Protected Memory Block: Not Supported 00:11:28.721 00:11:28.721 Firmware Slot Information 00:11:28.722 ========================= 00:11:28.722 Active slot: 1 00:11:28.722 Slot 1 Firmware Revision: 1.0 00:11:28.722 00:11:28.722 00:11:28.722 Commands Supported and Effects 00:11:28.722 ============================== 00:11:28.722 Admin Commands 00:11:28.722 -------------- 00:11:28.722 Delete I/O Submission Queue (00h): Supported 00:11:28.722 Create I/O Submission Queue (01h): Supported 00:11:28.722 Get Log Page (02h): Supported 00:11:28.722 Delete I/O Completion Queue (04h): Supported 00:11:28.722 Create I/O Completion Queue (05h): Supported 00:11:28.722 Identify (06h): Supported 00:11:28.722 Abort (08h): Supported 00:11:28.722 Set Features (09h): Supported 00:11:28.722 Get Features (0Ah): Supported 00:11:28.722 Asynchronous Event Request (0Ch): Supported 00:11:28.722 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.722 Directive Send (19h): Supported 00:11:28.722 Directive Receive (1Ah): Supported 00:11:28.722 Virtualization Management (1Ch): Supported 00:11:28.722 Doorbell Buffer Config (7Ch): Supported 00:11:28.722 Format NVM (80h): Supported LBA-Change 00:11:28.722 I/O Commands 00:11:28.722 ------------ 00:11:28.722 Flush (00h): Supported LBA-Change 00:11:28.722 Write (01h): Supported LBA-Change 00:11:28.722 Read (02h): Supported 00:11:28.722 Compare (05h): Supported
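As a quick consistency check on the dumps above (illustrative shell only, not part of the test run): the FDP-enabled namespace earlier reports 262144 LBAs at its current 4096-byte LBA format #04, which is exactly 1 GiB, and this controller's Max Data Transfer Size of 524288 bytes spans 128 of the 4096-byte minimum memory pages.

    # illustrative only: sanity-check the identify numbers above
    echo $(( 262144 * 4096 ))   # 1073741824 bytes = 1 GiB namespace size
    echo $(( 524288 / 4096 ))   # 128 minimum-size pages per max transfer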
00:11:28.722 Write Zeroes (08h): Supported LBA-Change 00:11:28.722 Dataset Management (09h): Supported LBA-Change 00:11:28.722 Unknown (0Ch): Supported 00:11:28.722 Unknown (12h): Supported 00:11:28.722 Copy (19h): Supported LBA-Change 00:11:28.722 Unknown (1Dh): Supported LBA-Change 00:11:28.722 00:11:28.722 Error Log 00:11:28.722 ========= 00:11:28.722 00:11:28.722 Arbitration 00:11:28.722 =========== 00:11:28.722 Arbitration Burst: no limit 00:11:28.722 00:11:28.722 Power Management 00:11:28.722 ================ 00:11:28.722 Number of Power States: 1 00:11:28.722 Current Power State: Power State #0 00:11:28.722 Power State #0: 00:11:28.722 Max Power: 25.00 W 00:11:28.722 Non-Operational State: Operational 00:11:28.722 Entry Latency: 16 microseconds 00:11:28.722 Exit Latency: 4 microseconds 00:11:28.722 Relative Read Throughput: 0 00:11:28.722 Relative Read Latency: 0 00:11:28.722 Relative Write Throughput: 0 00:11:28.722 Relative Write Latency: 0 00:11:28.722 Idle Power: Not Reported 00:11:28.722 Active Power: Not Reported 00:11:28.722 Non-Operational Permissive Mode: Not Supported 00:11:28.722 00:11:28.722 Health Information 00:11:28.722 ================== 00:11:28.722 Critical Warnings: 00:11:28.722 Available Spare Space: OK 00:11:28.722 Temperature: OK 00:11:28.722 Device Reliability: OK 00:11:28.722 Read Only: No 00:11:28.722 Volatile Memory Backup: OK 00:11:28.722 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.722 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.722 Available Spare: 0% 00:11:28.722 Available Spare Threshold: 0% 00:11:28.722 Life Percentage Used: 0% 00:11:28.722 Data Units Read: 2539 00:11:28.722 Data Units Written: 2326 00:11:28.722 Host Read Commands: 100657 00:11:28.722 Host Write Commands: 98926 00:11:28.722 Controller Busy Time: 0 minutes 00:11:28.722 Power Cycles: 0 00:11:28.722 Power On Hours: 0 hours 00:11:28.722 Unsafe Shutdowns: 0 00:11:28.722 Unrecoverable Media Errors: 0 00:11:28.722 Lifetime Error Log Entries: 0 00:11:28.722 Warning Temperature Time: 0 minutes 00:11:28.722 Critical Temperature Time: 0 minutes 00:11:28.722 00:11:28.722 Number of Queues 00:11:28.722 ================ 00:11:28.722 Number of I/O Submission Queues: 64 00:11:28.722 Number of I/O Completion Queues: 64 00:11:28.722 00:11:28.722 ZNS Specific Controller Data 00:11:28.722 ============================ 00:11:28.722 Zone Append Size Limit: 0 00:11:28.722 00:11:28.722 00:11:28.722 Active Namespaces 00:11:28.722 ================= 00:11:28.722 Namespace ID:1 00:11:28.722 Error Recovery Timeout: Unlimited 00:11:28.722 Command Set Identifier: NVM (00h) 00:11:28.722 Deallocate: Supported 00:11:28.722 Deallocated/Unwritten Error: Supported 00:11:28.722 Deallocated Read Value: All 0x00 00:11:28.722 Deallocate in Write Zeroes: Not Supported 00:11:28.722 Deallocated Guard Field: 0xFFFF 00:11:28.722 Flush: Supported 00:11:28.722 Reservation: Not Supported 00:11:28.722 Namespace Sharing Capabilities: Private 00:11:28.722 Size (in LBAs): 1048576 (4GiB) 00:11:28.722 Capacity (in LBAs): 1048576 (4GiB) 00:11:28.722 Utilization (in LBAs): 1048576 (4GiB) 00:11:28.722 Thin Provisioning: Not Supported 00:11:28.722 Per-NS Atomic Units: No 00:11:28.722 Maximum Single Source Range Length: 128 00:11:28.722 Maximum Copy Length: 128 00:11:28.722 Maximum Source Range Count: 128 00:11:28.722 NGUID/EUI64 Never Reused: No 00:11:28.722 Namespace Write Protected: No 00:11:28.722 Number of LBA Formats: 8 00:11:28.722 Current LBA Format: LBA Format #04 00:11:28.722 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:11:28.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.722 00:11:28.722 NVM Specific Namespace Data 00:11:28.722 =========================== 00:11:28.722 Logical Block Storage Tag Mask: 0 00:11:28.722 Protection Information Capabilities: 00:11:28.722 16b Guard Protection Information Storage Tag Support: No 00:11:28.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.722 Storage Tag Check Read Support: No 00:11:28.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Namespace ID:2 00:11:28.722 Error Recovery Timeout: Unlimited 00:11:28.722 Command Set Identifier: NVM (00h) 00:11:28.722 Deallocate: Supported 00:11:28.722 Deallocated/Unwritten Error: Supported 00:11:28.722 Deallocated Read Value: All 0x00 00:11:28.722 Deallocate in Write Zeroes: Not Supported 00:11:28.722 Deallocated Guard Field: 0xFFFF 00:11:28.722 Flush: Supported 00:11:28.722 Reservation: Not Supported 00:11:28.722 Namespace Sharing Capabilities: Private 00:11:28.722 Size (in LBAs): 1048576 (4GiB) 00:11:28.722 Capacity (in LBAs): 1048576 (4GiB) 00:11:28.722 Utilization (in LBAs): 1048576 (4GiB) 00:11:28.722 Thin Provisioning: Not Supported 00:11:28.722 Per-NS Atomic Units: No 00:11:28.722 Maximum Single Source Range Length: 128 00:11:28.722 Maximum Copy Length: 128 00:11:28.722 Maximum Source Range Count: 128 00:11:28.722 NGUID/EUI64 Never Reused: No 00:11:28.722 Namespace Write Protected: No 00:11:28.722 Number of LBA Formats: 8 00:11:28.722 Current LBA Format: LBA Format #04 00:11:28.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.722 00:11:28.722 NVM Specific Namespace Data 00:11:28.722 =========================== 00:11:28.722 Logical Block Storage Tag Mask: 0 00:11:28.722 Protection Information Capabilities: 00:11:28.722 16b Guard Protection Information Storage Tag Support: No 00:11:28.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.722 Storage 
Tag Check Read Support: No 00:11:28.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Namespace ID:3 00:11:28.722 Error Recovery Timeout: Unlimited 00:11:28.722 Command Set Identifier: NVM (00h) 00:11:28.722 Deallocate: Supported 00:11:28.722 Deallocated/Unwritten Error: Supported 00:11:28.722 Deallocated Read Value: All 0x00 00:11:28.722 Deallocate in Write Zeroes: Not Supported 00:11:28.722 Deallocated Guard Field: 0xFFFF 00:11:28.722 Flush: Supported 00:11:28.722 Reservation: Not Supported 00:11:28.722 Namespace Sharing Capabilities: Private 00:11:28.722 Size (in LBAs): 1048576 (4GiB) 00:11:28.722 Capacity (in LBAs): 1048576 (4GiB) 00:11:28.722 Utilization (in LBAs): 1048576 (4GiB) 00:11:28.722 Thin Provisioning: Not Supported 00:11:28.722 Per-NS Atomic Units: No 00:11:28.722 Maximum Single Source Range Length: 128 00:11:28.722 Maximum Copy Length: 128 00:11:28.722 Maximum Source Range Count: 128 00:11:28.722 NGUID/EUI64 Never Reused: No 00:11:28.722 Namespace Write Protected: No 00:11:28.722 Number of LBA Formats: 8 00:11:28.722 Current LBA Format: LBA Format #04 00:11:28.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.722 00:11:28.722 NVM Specific Namespace Data 00:11:28.722 =========================== 00:11:28.722 Logical Block Storage Tag Mask: 0 00:11:28.722 Protection Information Capabilities: 00:11:28.722 16b Guard Protection Information Storage Tag Support: No 00:11:28.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.722 Storage Tag Check Read Support: No 00:11:28.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.722 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:11:28.722 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:28.722 12:21:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:28.982 ===================================================== 00:11:28.982 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:28.982 ===================================================== 00:11:28.982 Controller Capabilities/Features 00:11:28.982 ================================ 00:11:28.982 Vendor ID: 1b36 00:11:28.982 Subsystem Vendor ID: 1af4 00:11:28.982 Serial Number: 12340 00:11:28.982 Model Number: QEMU NVMe Ctrl 00:11:28.982 Firmware Version: 8.0.0 00:11:28.982 Recommended Arb Burst: 6 00:11:28.982 IEEE OUI Identifier: 00 54 52 00:11:28.982 Multi-path I/O 00:11:28.982 May have multiple subsystem ports: No 00:11:28.982 May have multiple controllers: No 00:11:28.982 Associated with SR-IOV VF: No 00:11:28.982 Max Data Transfer Size: 524288 00:11:28.982 Max Number of Namespaces: 256 00:11:28.982 Max Number of I/O Queues: 64 00:11:28.982 NVMe Specification Version (VS): 1.4 00:11:28.982 NVMe Specification Version (Identify): 1.4 00:11:28.982 Maximum Queue Entries: 2048 00:11:28.982 Contiguous Queues Required: Yes 00:11:28.982 Arbitration Mechanisms Supported 00:11:28.982 Weighted Round Robin: Not Supported 00:11:28.982 Vendor Specific: Not Supported 00:11:28.982 Reset Timeout: 7500 ms 00:11:28.982 Doorbell Stride: 4 bytes 00:11:28.982 NVM Subsystem Reset: Not Supported 00:11:28.982 Command Sets Supported 00:11:28.982 NVM Command Set: Supported 00:11:28.982 Boot Partition: Not Supported 00:11:28.982 Memory Page Size Minimum: 4096 bytes 00:11:28.982 Memory Page Size Maximum: 65536 bytes 00:11:28.982 Persistent Memory Region: Not Supported 00:11:28.982 Optional Asynchronous Events Supported 00:11:28.982 Namespace Attribute Notices: Supported 00:11:28.982 Firmware Activation Notices: Not Supported 00:11:28.982 ANA Change Notices: Not Supported 00:11:28.982 PLE Aggregate Log Change Notices: Not Supported 00:11:28.982 LBA Status Info Alert Notices: Not Supported 00:11:28.982 EGE Aggregate Log Change Notices: Not Supported 00:11:28.982 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.982 Zone Descriptor Change Notices: Not Supported 00:11:28.982 Discovery Log Change Notices: Not Supported 00:11:28.982 Controller Attributes 00:11:28.982 128-bit Host Identifier: Not Supported 00:11:28.982 Non-Operational Permissive Mode: Not Supported 00:11:28.982 NVM Sets: Not Supported 00:11:28.982 Read Recovery Levels: Not Supported 00:11:28.982 Endurance Groups: Not Supported 00:11:28.982 Predictable Latency Mode: Not Supported 00:11:28.982 Traffic Based Keep Alive: Not Supported 00:11:28.982 Namespace Granularity: Not Supported 00:11:28.982 SQ Associations: Not Supported 00:11:28.982 UUID List: Not Supported 00:11:28.982 Multi-Domain Subsystem: Not Supported 00:11:28.982 Fixed Capacity Management: Not Supported 00:11:28.982 Variable Capacity Management: Not Supported 00:11:28.982 Delete Endurance Group: Not Supported 00:11:28.982 Delete NVM Set: Not Supported 00:11:28.982 Extended LBA Formats Supported: Supported 00:11:28.982 Flexible Data Placement Supported: Not Supported 00:11:28.982 00:11:28.982 Controller Memory Buffer Support 00:11:28.982 ================================ 00:11:28.982 Supported: No 00:11:28.982 00:11:28.982 Persistent Memory Region Support 00:11:28.982
================================ 00:11:28.982 Supported: No 00:11:28.982 00:11:28.982 Admin Command Set Attributes 00:11:28.982 ============================ 00:11:28.982 Security Send/Receive: Not Supported 00:11:28.982 Format NVM: Supported 00:11:28.982 Firmware Activate/Download: Not Supported 00:11:28.982 Namespace Management: Supported 00:11:28.982 Device Self-Test: Not Supported 00:11:28.982 Directives: Supported 00:11:28.982 NVMe-MI: Not Supported 00:11:28.982 Virtualization Management: Not Supported 00:11:28.982 Doorbell Buffer Config: Supported 00:11:28.982 Get LBA Status Capability: Not Supported 00:11:28.982 Command & Feature Lockdown Capability: Not Supported 00:11:28.982 Abort Command Limit: 4 00:11:28.982 Async Event Request Limit: 4 00:11:28.982 Number of Firmware Slots: N/A 00:11:28.982 Firmware Slot 1 Read-Only: N/A 00:11:28.982 Firmware Activation Without Reset: N/A 00:11:28.982 Multiple Update Detection Support: N/A 00:11:28.982 Firmware Update Granularity: No Information Provided 00:11:28.982 Per-Namespace SMART Log: Yes 00:11:28.982 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.982 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:28.982 Command Effects Log Page: Supported 00:11:28.982 Get Log Page Extended Data: Supported 00:11:28.982 Telemetry Log Pages: Not Supported 00:11:28.982 Persistent Event Log Pages: Not Supported 00:11:28.982 Supported Log Pages Log Page: May Support 00:11:28.982 Commands Supported & Effects Log Page: Not Supported 00:11:28.982 Feature Identifiers & Effects Log Page: May Support 00:11:28.982 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.982 Data Area 4 for Telemetry Log: Not Supported 00:11:28.982 Error Log Page Entries Supported: 1 00:11:28.982 Keep Alive: Not Supported 00:11:28.982 00:11:28.982 NVM Command Set Attributes 00:11:28.982 ========================== 00:11:28.982 Submission Queue Entry Size 00:11:28.982 Max: 64 00:11:28.982 Min: 64 00:11:28.982 Completion Queue Entry Size 00:11:28.982 Max: 16 00:11:28.982 Min: 16 00:11:28.982 Number of Namespaces: 256 00:11:28.982 Compare Command: Supported 00:11:28.982 Write Uncorrectable Command: Not Supported 00:11:28.982 Dataset Management Command: Supported 00:11:28.982 Write Zeroes Command: Supported 00:11:28.982 Set Features Save Field: Supported 00:11:28.982 Reservations: Not Supported 00:11:28.982 Timestamp: Supported 00:11:28.982 Copy: Supported 00:11:28.982 Volatile Write Cache: Present 00:11:28.982 Atomic Write Unit (Normal): 1 00:11:28.982 Atomic Write Unit (PFail): 1 00:11:28.982 Atomic Compare & Write Unit: 1 00:11:28.982 Fused Compare & Write: Not Supported 00:11:28.982 Scatter-Gather List 00:11:28.982 SGL Command Set: Supported 00:11:28.982 SGL Keyed: Not Supported 00:11:28.982 SGL Bit Bucket Descriptor: Not Supported 00:11:28.982 SGL Metadata Pointer: Not Supported 00:11:28.983 Oversized SGL: Not Supported 00:11:28.983 SGL Metadata Address: Not Supported 00:11:28.983 SGL Offset: Not Supported 00:11:28.983 Transport SGL Data Block: Not Supported 00:11:28.983 Replay Protected Memory Block: Not Supported 00:11:28.983 00:11:28.983 Firmware Slot Information 00:11:28.983 ========================= 00:11:28.983 Active slot: 1 00:11:28.983 Slot 1 Firmware Revision: 1.0 00:11:28.983 00:11:28.983 00:11:28.983 Commands Supported and Effects 00:11:28.983 ============================== 00:11:28.983 Admin Commands 00:11:28.983 -------------- 00:11:28.983 Delete I/O Submission Queue (00h): Supported 00:11:28.983 Create I/O Submission Queue (01h): Supported 00:11:28.983
Get Log Page (02h): Supported 00:11:28.983 Delete I/O Completion Queue (04h): Supported 00:11:28.983 Create I/O Completion Queue (05h): Supported 00:11:28.983 Identify (06h): Supported 00:11:28.983 Abort (08h): Supported 00:11:28.983 Set Features (09h): Supported 00:11:28.983 Get Features (0Ah): Supported 00:11:28.983 Asynchronous Event Request (0Ch): Supported 00:11:28.983 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.983 Directive Send (19h): Supported 00:11:28.983 Directive Receive (1Ah): Supported 00:11:28.983 Virtualization Management (1Ch): Supported 00:11:28.983 Doorbell Buffer Config (7Ch): Supported 00:11:28.983 Format NVM (80h): Supported LBA-Change 00:11:28.983 I/O Commands 00:11:28.983 ------------ 00:11:28.983 Flush (00h): Supported LBA-Change 00:11:28.983 Write (01h): Supported LBA-Change 00:11:28.983 Read (02h): Supported 00:11:28.983 Compare (05h): Supported 00:11:28.983 Write Zeroes (08h): Supported LBA-Change 00:11:28.983 Dataset Management (09h): Supported LBA-Change 00:11:28.983 Unknown (0Ch): Supported 00:11:28.983 Unknown (12h): Supported 00:11:28.983 Copy (19h): Supported LBA-Change 00:11:28.983 Unknown (1Dh): Supported LBA-Change 00:11:28.983 00:11:28.983 Error Log 00:11:28.983 ========= 00:11:28.983 00:11:28.983 Arbitration 00:11:28.983 =========== 00:11:28.983 Arbitration Burst: no limit 00:11:28.983 00:11:28.983 Power Management 00:11:28.983 ================ 00:11:28.983 Number of Power States: 1 00:11:28.983 Current Power State: Power State #0 00:11:28.983 Power State #0: 00:11:28.983 Max Power: 25.00 W 00:11:28.983 Non-Operational State: Operational 00:11:28.983 Entry Latency: 16 microseconds 00:11:28.983 Exit Latency: 4 microseconds 00:11:28.983 Relative Read Throughput: 0 00:11:28.983 Relative Read Latency: 0 00:11:28.983 Relative Write Throughput: 0 00:11:28.983 Relative Write Latency: 0 00:11:28.983 Idle Power: Not Reported 00:11:28.983 Active Power: Not Reported 00:11:28.983 Non-Operational Permissive Mode: Not Supported 00:11:28.983 00:11:28.983 Health Information 00:11:28.983 ================== 00:11:28.983 Critical Warnings: 00:11:28.983 Available Spare Space: OK 00:11:28.983 Temperature: OK 00:11:28.983 Device Reliability: OK 00:11:28.983 Read Only: No 00:11:28.983 Volatile Memory Backup: OK 00:11:28.983 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.983 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.983 Available Spare: 0% 00:11:28.983 Available Spare Threshold: 0% 00:11:28.983 Life Percentage Used: 0% 00:11:28.983 Data Units Read: 751 00:11:28.983 Data Units Written: 679 00:11:28.983 Host Read Commands: 32589 00:11:28.983 Host Write Commands: 32375 00:11:28.983 Controller Busy Time: 0 minutes 00:11:28.983 Power Cycles: 0 00:11:28.983 Power On Hours: 0 hours 00:11:28.983 Unsafe Shutdowns: 0 00:11:28.983 Unrecoverable Media Errors: 0 00:11:28.983 Lifetime Error Log Entries: 0 00:11:28.983 Warning Temperature Time: 0 minutes 00:11:28.983 Critical Temperature Time: 0 minutes 00:11:28.983 00:11:28.983 Number of Queues 00:11:28.983 ================ 00:11:28.983 Number of I/O Submission Queues: 64 00:11:28.983 Number of I/O Completion Queues: 64 00:11:28.983 00:11:28.983 ZNS Specific Controller Data 00:11:28.983 ============================ 00:11:28.983 Zone Append Size Limit: 0 00:11:28.983 00:11:28.983 00:11:28.983 Active Namespaces 00:11:28.983 ================= 00:11:28.983 Namespace ID:1 00:11:28.983 Error Recovery Timeout: Unlimited 00:11:28.983 Command Set Identifier: NVM (00h) 00:11:28.983 Deallocate: Supported 
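The 12:21:51 nvme.nvme_identify trace markers above (nvme/nvme.sh@15 and @16) show the loop that produces these per-controller dumps. A minimal sketch of that loop follows; the bdfs values are filled in by hand here for illustration, since the real script populates the array from the attached PCIe controllers:

    # sketch of the traced identify loop; the bdfs contents are assumed
    # for illustration -- nvme.sh discovers the addresses elsewhere
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0
    done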
00:11:28.983 Deallocated/Unwritten Error: Supported 00:11:28.983 Deallocated Read Value: All 0x00 00:11:28.983 Deallocate in Write Zeroes: Not Supported 00:11:28.983 Deallocated Guard Field: 0xFFFF 00:11:28.983 Flush: Supported 00:11:28.983 Reservation: Not Supported 00:11:28.983 Metadata Transferred as: Separate Metadata Buffer 00:11:28.983 Namespace Sharing Capabilities: Private 00:11:28.983 Size (in LBAs): 1548666 (5GiB) 00:11:28.983 Capacity (in LBAs): 1548666 (5GiB) 00:11:28.983 Utilization (in LBAs): 1548666 (5GiB) 00:11:28.983 Thin Provisioning: Not Supported 00:11:28.983 Per-NS Atomic Units: No 00:11:28.983 Maximum Single Source Range Length: 128 00:11:28.983 Maximum Copy Length: 128 00:11:28.983 Maximum Source Range Count: 128 00:11:28.983 NGUID/EUI64 Never Reused: No 00:11:28.983 Namespace Write Protected: No 00:11:28.983 Number of LBA Formats: 8 00:11:28.983 Current LBA Format: LBA Format #07 00:11:28.983 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.983 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.983 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.983 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.983 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.983 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.983 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.983 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.983 00:11:28.983 NVM Specific Namespace Data 00:11:28.983 =========================== 00:11:28.983 Logical Block Storage Tag Mask: 0 00:11:28.983 Protection Information Capabilities: 00:11:28.983 16b Guard Protection Information Storage Tag Support: No 00:11:28.983 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.983 Storage Tag Check Read Support: No 00:11:28.983 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.983 12:21:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:28.983 12:21:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:29.242 ===================================================== 00:11:29.242 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:29.242 ===================================================== 00:11:29.242 Controller Capabilities/Features 00:11:29.242 ================================ 00:11:29.242 Vendor ID: 1b36 00:11:29.242 Subsystem Vendor ID: 1af4 00:11:29.242 Serial Number: 12341 00:11:29.242 Model Number: QEMU NVMe Ctrl 00:11:29.242 Firmware Version: 8.0.0 00:11:29.242 Recommended Arb Burst: 6 00:11:29.242 IEEE OUI Identifier: 00 54 52 00:11:29.242 Multi-path I/O 00:11:29.242 May have multiple subsystem ports: No 00:11:29.242 May have multiple 
controllers: No 00:11:29.242 Associated with SR-IOV VF: No 00:11:29.242 Max Data Transfer Size: 524288 00:11:29.242 Max Number of Namespaces: 256 00:11:29.242 Max Number of I/O Queues: 64 00:11:29.242 NVMe Specification Version (VS): 1.4 00:11:29.242 NVMe Specification Version (Identify): 1.4 00:11:29.242 Maximum Queue Entries: 2048 00:11:29.242 Contiguous Queues Required: Yes 00:11:29.242 Arbitration Mechanisms Supported 00:11:29.242 Weighted Round Robin: Not Supported 00:11:29.242 Vendor Specific: Not Supported 00:11:29.242 Reset Timeout: 7500 ms 00:11:29.242 Doorbell Stride: 4 bytes 00:11:29.242 NVM Subsystem Reset: Not Supported 00:11:29.242 Command Sets Supported 00:11:29.242 NVM Command Set: Supported 00:11:29.242 Boot Partition: Not Supported 00:11:29.242 Memory Page Size Minimum: 4096 bytes 00:11:29.242 Memory Page Size Maximum: 65536 bytes 00:11:29.242 Persistent Memory Region: Not Supported 00:11:29.242 Optional Asynchronous Events Supported 00:11:29.242 Namespace Attribute Notices: Supported 00:11:29.242 Firmware Activation Notices: Not Supported 00:11:29.242 ANA Change Notices: Not Supported 00:11:29.242 PLE Aggregate Log Change Notices: Not Supported 00:11:29.242 LBA Status Info Alert Notices: Not Supported 00:11:29.242 EGE Aggregate Log Change Notices: Not Supported 00:11:29.242 Normal NVM Subsystem Shutdown event: Not Supported 00:11:29.242 Zone Descriptor Change Notices: Not Supported 00:11:29.242 Discovery Log Change Notices: Not Supported 00:11:29.242 Controller Attributes 00:11:29.242 128-bit Host Identifier: Not Supported 00:11:29.242 Non-Operational Permissive Mode: Not Supported 00:11:29.242 NVM Sets: Not Supported 00:11:29.242 Read Recovery Levels: Not Supported 00:11:29.242 Endurance Groups: Not Supported 00:11:29.242 Predictable Latency Mode: Not Supported 00:11:29.242 Traffic Based Keep Alive: Not Supported 00:11:29.242 Namespace Granularity: Not Supported 00:11:29.242 SQ Associations: Not Supported 00:11:29.242 UUID List: Not Supported 00:11:29.242 Multi-Domain Subsystem: Not Supported 00:11:29.242 Fixed Capacity Management: Not Supported 00:11:29.242 Variable Capacity Management: Not Supported 00:11:29.242 Delete Endurance Group: Not Supported 00:11:29.242 Delete NVM Set: Not Supported 00:11:29.242 Extended LBA Formats Supported: Supported 00:11:29.242 Flexible Data Placement Supported: Not Supported 00:11:29.242 00:11:29.242 Controller Memory Buffer Support 00:11:29.242 ================================ 00:11:29.242 Supported: No 00:11:29.242 00:11:29.242 Persistent Memory Region Support 00:11:29.242 ================================ 00:11:29.242 Supported: No 00:11:29.242 00:11:29.242 Admin Command Set Attributes 00:11:29.242 ============================ 00:11:29.242 Security Send/Receive: Not Supported 00:11:29.242 Format NVM: Supported 00:11:29.242 Firmware Activate/Download: Not Supported 00:11:29.242 Namespace Management: Supported 00:11:29.242 Device Self-Test: Not Supported 00:11:29.242 Directives: Supported 00:11:29.242 NVMe-MI: Not Supported 00:11:29.242 Virtualization Management: Not Supported 00:11:29.242 Doorbell Buffer Config: Supported 00:11:29.242 Get LBA Status Capability: Not Supported 00:11:29.242 Command & Feature Lockdown Capability: Not Supported 00:11:29.243 Abort Command Limit: 4 00:11:29.243 Async Event Request Limit: 4 00:11:29.243 Number of Firmware Slots: N/A 00:11:29.243 Firmware Slot 1 Read-Only: N/A 00:11:29.243 Firmware Activation Without Reset: N/A 00:11:29.243 Multiple Update Detection Support: N/A 00:11:29.243 Firmware Update
Granularity: No Information Provided 00:11:29.243 Per-Namespace SMART Log: Yes 00:11:29.243 Asymmetric Namespace Access Log Page: Not Supported 00:11:29.243 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:29.243 Command Effects Log Page: Supported 00:11:29.243 Get Log Page Extended Data: Supported 00:11:29.243 Telemetry Log Pages: Not Supported 00:11:29.243 Persistent Event Log Pages: Not Supported 00:11:29.243 Supported Log Pages Log Page: May Support 00:11:29.243 Commands Supported & Effects Log Page: Not Supported 00:11:29.243 Feature Identifiers & Effects Log Page: May Support 00:11:29.243 NVMe-MI Commands & Effects Log Page: May Support 00:11:29.243 Data Area 4 for Telemetry Log: Not Supported 00:11:29.243 Error Log Page Entries Supported: 1 00:11:29.243 Keep Alive: Not Supported 00:11:29.243 00:11:29.243 NVM Command Set Attributes 00:11:29.243 ========================== 00:11:29.243 Submission Queue Entry Size 00:11:29.243 Max: 64 00:11:29.243 Min: 64 00:11:29.243 Completion Queue Entry Size 00:11:29.243 Max: 16 00:11:29.243 Min: 16 00:11:29.243 Number of Namespaces: 256 00:11:29.243 Compare Command: Supported 00:11:29.243 Write Uncorrectable Command: Not Supported 00:11:29.243 Dataset Management Command: Supported 00:11:29.243 Write Zeroes Command: Supported 00:11:29.243 Set Features Save Field: Supported 00:11:29.243 Reservations: Not Supported 00:11:29.243 Timestamp: Supported 00:11:29.243 Copy: Supported 00:11:29.243 Volatile Write Cache: Present 00:11:29.243 Atomic Write Unit (Normal): 1 00:11:29.243 Atomic Write Unit (PFail): 1 00:11:29.243 Atomic Compare & Write Unit: 1 00:11:29.243 Fused Compare & Write: Not Supported 00:11:29.243 Scatter-Gather List 00:11:29.243 SGL Command Set: Supported 00:11:29.243 SGL Keyed: Not Supported 00:11:29.243 SGL Bit Bucket Descriptor: Not Supported 00:11:29.243 SGL Metadata Pointer: Not Supported 00:11:29.243 Oversized SGL: Not Supported 00:11:29.243 SGL Metadata Address: Not Supported 00:11:29.243 SGL Offset: Not Supported 00:11:29.243 Transport SGL Data Block: Not Supported 00:11:29.243 Replay Protected Memory Block: Not Supported 00:11:29.243 00:11:29.243 Firmware Slot Information 00:11:29.243 ========================= 00:11:29.243 Active slot: 1 00:11:29.243 Slot 1 Firmware Revision: 1.0 00:11:29.243 00:11:29.243 00:11:29.243 Commands Supported and Effects 00:11:29.243 ============================== 00:11:29.243 Admin Commands 00:11:29.243 -------------- 00:11:29.243 Delete I/O Submission Queue (00h): Supported 00:11:29.243 Create I/O Submission Queue (01h): Supported 00:11:29.243 Get Log Page (02h): Supported 00:11:29.243 Delete I/O Completion Queue (04h): Supported 00:11:29.243 Create I/O Completion Queue (05h): Supported 00:11:29.243 Identify (06h): Supported 00:11:29.243 Abort (08h): Supported 00:11:29.243 Set Features (09h): Supported 00:11:29.243 Get Features (0Ah): Supported 00:11:29.243 Asynchronous Event Request (0Ch): Supported 00:11:29.243 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:29.243 Directive Send (19h): Supported 00:11:29.243 Directive Receive (1Ah): Supported 00:11:29.243 Virtualization Management (1Ch): Supported 00:11:29.243 Doorbell Buffer Config (7Ch): Supported 00:11:29.243 Format NVM (80h): Supported LBA-Change 00:11:29.243 I/O Commands 00:11:29.243 ------------ 00:11:29.243 Flush (00h): Supported LBA-Change 00:11:29.243 Write (01h): Supported LBA-Change 00:11:29.243 Read (02h): Supported 00:11:29.243 Compare (05h): Supported 00:11:29.243 Write Zeroes (08h): Supported LBA-Change 00:11:29.243
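When only one or two fields matter, filtering the same command is easier than reading a full dump. An illustrative one-liner (not part of the test run) that pulls the serial number and data-unit counters for the controller at 0000:00:11.0:

    # illustrative filter over the same command the test invokes
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 \
        | grep -E 'Serial Number|Data Units'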
Dataset Management (09h): Supported LBA-Change 00:11:29.243 Unknown (0Ch): Supported 00:11:29.243 Unknown (12h): Supported 00:11:29.243 Copy (19h): Supported LBA-Change 00:11:29.243 Unknown (1Dh): Supported LBA-Change 00:11:29.243 00:11:29.243 Error Log 00:11:29.243 ========= 00:11:29.243 00:11:29.243 Arbitration 00:11:29.243 =========== 00:11:29.243 Arbitration Burst: no limit 00:11:29.243 00:11:29.243 Power Management 00:11:29.243 ================ 00:11:29.243 Number of Power States: 1 00:11:29.243 Current Power State: Power State #0 00:11:29.243 Power State #0: 00:11:29.243 Max Power: 25.00 W 00:11:29.243 Non-Operational State: Operational 00:11:29.243 Entry Latency: 16 microseconds 00:11:29.243 Exit Latency: 4 microseconds 00:11:29.243 Relative Read Throughput: 0 00:11:29.243 Relative Read Latency: 0 00:11:29.243 Relative Write Throughput: 0 00:11:29.243 Relative Write Latency: 0 00:11:29.243 Idle Power: Not Reported 00:11:29.243 Active Power: Not Reported 00:11:29.243 Non-Operational Permissive Mode: Not Supported 00:11:29.243 00:11:29.243 Health Information 00:11:29.243 ================== 00:11:29.243 Critical Warnings: 00:11:29.243 Available Spare Space: OK 00:11:29.243 Temperature: OK 00:11:29.243 Device Reliability: OK 00:11:29.243 Read Only: No 00:11:29.243 Volatile Memory Backup: OK 00:11:29.243 Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.243 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:29.243 Available Spare: 0% 00:11:29.243 Available Spare Threshold: 0% 00:11:29.243 Life Percentage Used: 0% 00:11:29.243 Data Units Read: 1157 00:11:29.243 Data Units Written: 1030 00:11:29.243 Host Read Commands: 48290 00:11:29.243 Host Write Commands: 47178 00:11:29.243 Controller Busy Time: 0 minutes 00:11:29.243 Power Cycles: 0 00:11:29.243 Power On Hours: 0 hours 00:11:29.243 Unsafe Shutdowns: 0 00:11:29.243 Unrecoverable Media Errors: 0 00:11:29.243 Lifetime Error Log Entries: 0 00:11:29.243 Warning Temperature Time: 0 minutes 00:11:29.243 Critical Temperature Time: 0 minutes 00:11:29.243 00:11:29.243 Number of Queues 00:11:29.243 ================ 00:11:29.243 Number of I/O Submission Queues: 64 00:11:29.243 Number of I/O Completion Queues: 64 00:11:29.243 00:11:29.243 ZNS Specific Controller Data 00:11:29.243 ============================ 00:11:29.243 Zone Append Size Limit: 0 00:11:29.243 00:11:29.243 00:11:29.243 Active Namespaces 00:11:29.243 ================= 00:11:29.243 Namespace ID:1 00:11:29.243 Error Recovery Timeout: Unlimited 00:11:29.243 Command Set Identifier: NVM (00h) 00:11:29.243 Deallocate: Supported 00:11:29.243 Deallocated/Unwritten Error: Supported 00:11:29.243 Deallocated Read Value: All 0x00 00:11:29.243 Deallocate in Write Zeroes: Not Supported 00:11:29.243 Deallocated Guard Field: 0xFFFF 00:11:29.243 Flush: Supported 00:11:29.243 Reservation: Not Supported 00:11:29.243 Namespace Sharing Capabilities: Private 00:11:29.243 Size (in LBAs): 1310720 (5GiB) 00:11:29.243 Capacity (in LBAs): 1310720 (5GiB) 00:11:29.243 Utilization (in LBAs): 1310720 (5GiB) 00:11:29.243 Thin Provisioning: Not Supported 00:11:29.243 Per-NS Atomic Units: No 00:11:29.243 Maximum Single Source Range Length: 128 00:11:29.243 Maximum Copy Length: 128 00:11:29.243 Maximum Source Range Count: 128 00:11:29.243 NGUID/EUI64 Never Reused: No 00:11:29.243 Namespace Write Protected: No 00:11:29.243 Number of LBA Formats: 8 00:11:29.243 Current LBA Format: LBA Format #04 00:11:29.243 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.243 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:11:29.243 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.243 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.243 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.243 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.243 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.243 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.243 00:11:29.243 NVM Specific Namespace Data 00:11:29.243 =========================== 00:11:29.243 Logical Block Storage Tag Mask: 0 00:11:29.243 Protection Information Capabilities: 00:11:29.243 16b Guard Protection Information Storage Tag Support: No 00:11:29.243 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.243 Storage Tag Check Read Support: No 00:11:29.243 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.243 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.243 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.243 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.243 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.244 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.244 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.244 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.244 12:21:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:29.244 12:21:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:29.812 ===================================================== 00:11:29.812 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:29.812 ===================================================== 00:11:29.812 Controller Capabilities/Features 00:11:29.812 ================================ 00:11:29.812 Vendor ID: 1b36 00:11:29.812 Subsystem Vendor ID: 1af4 00:11:29.812 Serial Number: 12342 00:11:29.812 Model Number: QEMU NVMe Ctrl 00:11:29.812 Firmware Version: 8.0.0 00:11:29.812 Recommended Arb Burst: 6 00:11:29.812 IEEE OUI Identifier: 00 54 52 00:11:29.812 Multi-path I/O 00:11:29.812 May have multiple subsystem ports: No 00:11:29.812 May have multiple controllers: No 00:11:29.812 Associated with SR-IOV VF: No 00:11:29.812 Max Data Transfer Size: 524288 00:11:29.812 Max Number of Namespaces: 256 00:11:29.812 Max Number of I/O Queues: 64 00:11:29.812 NVMe Specification Version (VS): 1.4 00:11:29.812 NVMe Specification Version (Identify): 1.4 00:11:29.812 Maximum Queue Entries: 2048 00:11:29.812 Contiguous Queues Required: Yes 00:11:29.812 Arbitration Mechanisms Supported 00:11:29.812 Weighted Round Robin: Not Supported 00:11:29.812 Vendor Specific: Not Supported 00:11:29.812 Reset Timeout: 7500 ms 00:11:29.812 Doorbell Stride: 4 bytes 00:11:29.812 NVM Subsystem Reset: Not Supported 00:11:29.812 Command Sets Supported 00:11:29.812 NVM Command Set: Supported 00:11:29.812 Boot Partition: Not Supported 00:11:29.812 Memory Page Size Minimum: 4096 bytes 00:11:29.812 Memory Page Size Maximum: 65536 bytes 00:11:29.812 Persistent Memory Region: Not Supported 00:11:29.812 Optional Asynchronous Events Supported 00:11:29.812 Namespace Attribute Notices: Supported 00:11:29.812 
Firmware Activation Notices: Not Supported 00:11:29.812 ANA Change Notices: Not Supported 00:11:29.812 PLE Aggregate Log Change Notices: Not Supported 00:11:29.812 LBA Status Info Alert Notices: Not Supported 00:11:29.812 EGE Aggregate Log Change Notices: Not Supported 00:11:29.812 Normal NVM Subsystem Shutdown event: Not Supported 00:11:29.812 Zone Descriptor Change Notices: Not Supported 00:11:29.812 Discovery Log Change Notices: Not Supported 00:11:29.812 Controller Attributes 00:11:29.812 128-bit Host Identifier: Not Supported 00:11:29.812 Non-Operational Permissive Mode: Not Supported 00:11:29.812 NVM Sets: Not Supported 00:11:29.812 Read Recovery Levels: Not Supported 00:11:29.812 Endurance Groups: Not Supported 00:11:29.812 Predictable Latency Mode: Not Supported 00:11:29.812 Traffic Based Keep Alive: Not Supported 00:11:29.812 Namespace Granularity: Not Supported 00:11:29.812 SQ Associations: Not Supported 00:11:29.812 UUID List: Not Supported 00:11:29.812 Multi-Domain Subsystem: Not Supported 00:11:29.812 Fixed Capacity Management: Not Supported 00:11:29.812 Variable Capacity Management: Not Supported 00:11:29.812 Delete Endurance Group: Not Supported 00:11:29.812 Delete NVM Set: Not Supported 00:11:29.812 Extended LBA Formats Supported: Supported 00:11:29.812 Flexible Data Placement Supported: Not Supported 00:11:29.812 00:11:29.812 Controller Memory Buffer Support 00:11:29.812 ================================ 00:11:29.812 Supported: No 00:11:29.812 00:11:29.812 Persistent Memory Region Support 00:11:29.812 ================================ 00:11:29.812 Supported: No 00:11:29.812 00:11:29.812 Admin Command Set Attributes 00:11:29.812 ============================ 00:11:29.812 Security Send/Receive: Not Supported 00:11:29.812 Format NVM: Supported 00:11:29.812 Firmware Activate/Download: Not Supported 00:11:29.812 Namespace Management: Supported 00:11:29.812 Device Self-Test: Not Supported 00:11:29.812 Directives: Supported 00:11:29.812 NVMe-MI: Not Supported 00:11:29.812 Virtualization Management: Not Supported 00:11:29.813 Doorbell Buffer Config: Supported 00:11:29.813 Get LBA Status Capability: Not Supported 00:11:29.813 Command & Feature Lockdown Capability: Not Supported 00:11:29.813 Abort Command Limit: 4 00:11:29.813 Async Event Request Limit: 4 00:11:29.813 Number of Firmware Slots: N/A 00:11:29.813 Firmware Slot 1 Read-Only: N/A 00:11:29.813 Firmware Activation Without Reset: N/A 00:11:29.813 Multiple Update Detection Support: N/A 00:11:29.813 Firmware Update Granularity: No Information Provided 00:11:29.813 Per-Namespace SMART Log: Yes 00:11:29.813 Asymmetric Namespace Access Log Page: Not Supported 00:11:29.813 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:29.813 Command Effects Log Page: Supported 00:11:29.813 Get Log Page Extended Data: Supported 00:11:29.813 Telemetry Log Pages: Not Supported 00:11:29.813 Persistent Event Log Pages: Not Supported 00:11:29.813 Supported Log Pages Log Page: May Support 00:11:29.813 Commands Supported & Effects Log Page: Not Supported 00:11:29.813 Feature Identifiers & Effects Log Page: May Support 00:11:29.813 NVMe-MI Commands & Effects Log Page: May Support 00:11:29.813 Data Area 4 for Telemetry Log: Not Supported 00:11:29.813 Error Log Page Entries Supported: 1 00:11:29.813 Keep Alive: Not Supported 00:11:29.813 00:11:29.813 NVM Command Set Attributes 00:11:29.813 ========================== 00:11:29.813 Submission Queue Entry Size 00:11:29.813 Max: 64 00:11:29.813 Min: 64 00:11:29.813 Completion Queue Entry Size 00:11:29.813 Max: 16
00:11:29.813 Min: 16 00:11:29.813 Number of Namespaces: 256 00:11:29.813 Compare Command: Supported 00:11:29.813 Write Uncorrectable Command: Not Supported 00:11:29.813 Dataset Management Command: Supported 00:11:29.813 Write Zeroes Command: Supported 00:11:29.813 Set Features Save Field: Supported 00:11:29.813 Reservations: Not Supported 00:11:29.813 Timestamp: Supported 00:11:29.813 Copy: Supported 00:11:29.813 Volatile Write Cache: Present 00:11:29.813 Atomic Write Unit (Normal): 1 00:11:29.813 Atomic Write Unit (PFail): 1 00:11:29.813 Atomic Compare & Write Unit: 1 00:11:29.813 Fused Compare & Write: Not Supported 00:11:29.813 Scatter-Gather List 00:11:29.813 SGL Command Set: Supported 00:11:29.813 SGL Keyed: Not Supported 00:11:29.813 SGL Bit Bucket Descriptor: Not Supported 00:11:29.813 SGL Metadata Pointer: Not Supported 00:11:29.813 Oversized SGL: Not Supported 00:11:29.813 SGL Metadata Address: Not Supported 00:11:29.813 SGL Offset: Not Supported 00:11:29.813 Transport SGL Data Block: Not Supported 00:11:29.813 Replay Protected Memory Block: Not Supported 00:11:29.813 00:11:29.813 Firmware Slot Information 00:11:29.813 ========================= 00:11:29.813 Active slot: 1 00:11:29.813 Slot 1 Firmware Revision: 1.0 00:11:29.813 00:11:29.813 00:11:29.813 Commands Supported and Effects 00:11:29.813 ============================== 00:11:29.813 Admin Commands 00:11:29.813 -------------- 00:11:29.813 Delete I/O Submission Queue (00h): Supported 00:11:29.813 Create I/O Submission Queue (01h): Supported 00:11:29.813 Get Log Page (02h): Supported 00:11:29.813 Delete I/O Completion Queue (04h): Supported 00:11:29.813 Create I/O Completion Queue (05h): Supported 00:11:29.813 Identify (06h): Supported 00:11:29.813 Abort (08h): Supported 00:11:29.813 Set Features (09h): Supported 00:11:29.813 Get Features (0Ah): Supported 00:11:29.813 Asynchronous Event Request (0Ch): Supported 00:11:29.813 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:29.813 Directive Send (19h): Supported 00:11:29.813 Directive Receive (1Ah): Supported 00:11:29.813 Virtualization Management (1Ch): Supported 00:11:29.813 Doorbell Buffer Config (7Ch): Supported 00:11:29.813 Format NVM (80h): Supported LBA-Change 00:11:29.813 I/O Commands 00:11:29.813 ------------ 00:11:29.813 Flush (00h): Supported LBA-Change 00:11:29.813 Write (01h): Supported LBA-Change 00:11:29.813 Read (02h): Supported 00:11:29.813 Compare (05h): Supported 00:11:29.813 Write Zeroes (08h): Supported LBA-Change 00:11:29.813 Dataset Management (09h): Supported LBA-Change 00:11:29.813 Unknown (0Ch): Supported 00:11:29.813 Unknown (12h): Supported 00:11:29.813 Copy (19h): Supported LBA-Change 00:11:29.813 Unknown (1Dh): Supported LBA-Change 00:11:29.813 00:11:29.813 Error Log 00:11:29.813 ========= 00:11:29.813 00:11:29.813 Arbitration 00:11:29.813 =========== 00:11:29.813 Arbitration Burst: no limit 00:11:29.813 00:11:29.813 Power Management 00:11:29.813 ================ 00:11:29.813 Number of Power States: 1 00:11:29.813 Current Power State: Power State #0 00:11:29.813 Power State #0: 00:11:29.813 Max Power: 25.00 W 00:11:29.813 Non-Operational State: Operational 00:11:29.813 Entry Latency: 16 microseconds 00:11:29.813 Exit Latency: 4 microseconds 00:11:29.813 Relative Read Throughput: 0 00:11:29.813 Relative Read Latency: 0 00:11:29.813 Relative Write Throughput: 0 00:11:29.813 Relative Write Latency: 0 00:11:29.813 Idle Power: Not Reported 00:11:29.813 Active Power: Not Reported 00:11:29.813 Non-Operational Permissive Mode: Not Supported 
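The health blocks in these dumps print each temperature in both scales, and every pair differs by a flat 273 (50 = 323 - 273 for the current temperature, 70 = 343 - 273 for the threshold). A quick illustrative check:

    # illustrative: Kelvin-to-Celsius offset seen in the health logs
    echo $(( 323 - 273 ))   # 50 Celsius, reported current temperature
    echo $(( 343 - 273 ))   # 70 Celsius, reported temperature threshold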
00:11:29.813 00:11:29.813 Health Information 00:11:29.813 ================== 00:11:29.813 Critical Warnings: 00:11:29.813 Available Spare Space: OK 00:11:29.813 Temperature: OK 00:11:29.813 Device Reliability: OK 00:11:29.813 Read Only: No 00:11:29.813 Volatile Memory Backup: OK 00:11:29.813 Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.813 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:29.813 Available Spare: 0% 00:11:29.813 Available Spare Threshold: 0% 00:11:29.813 Life Percentage Used: 0% 00:11:29.813 Data Units Read: 2539 00:11:29.813 Data Units Written: 2326 00:11:29.813 Host Read Commands: 100657 00:11:29.813 Host Write Commands: 98926 00:11:29.813 Controller Busy Time: 0 minutes 00:11:29.813 Power Cycles: 0 00:11:29.813 Power On Hours: 0 hours 00:11:29.813 Unsafe Shutdowns: 0 00:11:29.813 Unrecoverable Media Errors: 0 00:11:29.813 Lifetime Error Log Entries: 0 00:11:29.813 Warning Temperature Time: 0 minutes 00:11:29.813 Critical Temperature Time: 0 minutes 00:11:29.813 00:11:29.813 Number of Queues 00:11:29.813 ================ 00:11:29.813 Number of I/O Submission Queues: 64 00:11:29.813 Number of I/O Completion Queues: 64 00:11:29.813 00:11:29.813 ZNS Specific Controller Data 00:11:29.813 ============================ 00:11:29.813 Zone Append Size Limit: 0 00:11:29.813 00:11:29.813 00:11:29.813 Active Namespaces 00:11:29.813 ================= 00:11:29.813 Namespace ID:1 00:11:29.813 Error Recovery Timeout: Unlimited 00:11:29.813 Command Set Identifier: NVM (00h) 00:11:29.813 Deallocate: Supported 00:11:29.813 Deallocated/Unwritten Error: Supported 00:11:29.813 Deallocated Read Value: All 0x00 00:11:29.813 Deallocate in Write Zeroes: Not Supported 00:11:29.813 Deallocated Guard Field: 0xFFFF 00:11:29.813 Flush: Supported 00:11:29.813 Reservation: Not Supported 00:11:29.813 Namespace Sharing Capabilities: Private 00:11:29.813 Size (in LBAs): 1048576 (4GiB) 00:11:29.813 Capacity (in LBAs): 1048576 (4GiB) 00:11:29.813 Utilization (in LBAs): 1048576 (4GiB) 00:11:29.813 Thin Provisioning: Not Supported 00:11:29.813 Per-NS Atomic Units: No 00:11:29.813 Maximum Single Source Range Length: 128 00:11:29.813 Maximum Copy Length: 128 00:11:29.813 Maximum Source Range Count: 128 00:11:29.813 NGUID/EUI64 Never Reused: No 00:11:29.813 Namespace Write Protected: No 00:11:29.813 Number of LBA Formats: 8 00:11:29.813 Current LBA Format: LBA Format #04 00:11:29.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.813 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.813 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.813 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.813 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.813 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.813 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.813 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.813 00:11:29.813 NVM Specific Namespace Data 00:11:29.813 =========================== 00:11:29.813 Logical Block Storage Tag Mask: 0 00:11:29.813 Protection Information Capabilities: 00:11:29.813 16b Guard Protection Information Storage Tag Support: No 00:11:29.813 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.813 Storage Tag Check Read Support: No 00:11:29.813 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.813 Namespace ID:2 00:11:29.813 Error Recovery Timeout: Unlimited 00:11:29.813 Command Set Identifier: NVM (00h) 00:11:29.814 Deallocate: Supported 00:11:29.814 Deallocated/Unwritten Error: Supported 00:11:29.814 Deallocated Read Value: All 0x00 00:11:29.814 Deallocate in Write Zeroes: Not Supported 00:11:29.814 Deallocated Guard Field: 0xFFFF 00:11:29.814 Flush: Supported 00:11:29.814 Reservation: Not Supported 00:11:29.814 Namespace Sharing Capabilities: Private 00:11:29.814 Size (in LBAs): 1048576 (4GiB) 00:11:29.814 Capacity (in LBAs): 1048576 (4GiB) 00:11:29.814 Utilization (in LBAs): 1048576 (4GiB) 00:11:29.814 Thin Provisioning: Not Supported 00:11:29.814 Per-NS Atomic Units: No 00:11:29.814 Maximum Single Source Range Length: 128 00:11:29.814 Maximum Copy Length: 128 00:11:29.814 Maximum Source Range Count: 128 00:11:29.814 NGUID/EUI64 Never Reused: No 00:11:29.814 Namespace Write Protected: No 00:11:29.814 Number of LBA Formats: 8 00:11:29.814 Current LBA Format: LBA Format #04 00:11:29.814 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.814 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.814 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.814 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.814 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.814 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.814 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.814 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.814 00:11:29.814 NVM Specific Namespace Data 00:11:29.814 =========================== 00:11:29.814 Logical Block Storage Tag Mask: 0 00:11:29.814 Protection Information Capabilities: 00:11:29.814 16b Guard Protection Information Storage Tag Support: No 00:11:29.814 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.814 Storage Tag Check Read Support: No 00:11:29.814 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Namespace ID:3 00:11:29.814 Error Recovery Timeout: Unlimited 00:11:29.814 Command Set Identifier: NVM (00h) 00:11:29.814 Deallocate: Supported 00:11:29.814 Deallocated/Unwritten Error: Supported 00:11:29.814 Deallocated Read 
Value: All 0x00 00:11:29.814 Deallocate in Write Zeroes: Not Supported 00:11:29.814 Deallocated Guard Field: 0xFFFF 00:11:29.814 Flush: Supported 00:11:29.814 Reservation: Not Supported 00:11:29.814 Namespace Sharing Capabilities: Private 00:11:29.814 Size (in LBAs): 1048576 (4GiB) 00:11:29.814 Capacity (in LBAs): 1048576 (4GiB) 00:11:29.814 Utilization (in LBAs): 1048576 (4GiB) 00:11:29.814 Thin Provisioning: Not Supported 00:11:29.814 Per-NS Atomic Units: No 00:11:29.814 Maximum Single Source Range Length: 128 00:11:29.814 Maximum Copy Length: 128 00:11:29.814 Maximum Source Range Count: 128 00:11:29.814 NGUID/EUI64 Never Reused: No 00:11:29.814 Namespace Write Protected: No 00:11:29.814 Number of LBA Formats: 8 00:11:29.814 Current LBA Format: LBA Format #04 00:11:29.814 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.814 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.814 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.814 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.814 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.814 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.814 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.814 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.814 00:11:29.814 NVM Specific Namespace Data 00:11:29.814 =========================== 00:11:29.814 Logical Block Storage Tag Mask: 0 00:11:29.814 Protection Information Capabilities: 00:11:29.814 16b Guard Protection Information Storage Tag Support: No 00:11:29.814 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.814 Storage Tag Check Read Support: No 00:11:29.814 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.814 12:21:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:29.814 12:21:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:30.073 ===================================================== 00:11:30.073 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:30.073 ===================================================== 00:11:30.073 Controller Capabilities/Features 00:11:30.073 ================================ 00:11:30.073 Vendor ID: 1b36 00:11:30.073 Subsystem Vendor ID: 1af4 00:11:30.073 Serial Number: 12343 00:11:30.073 Model Number: QEMU NVMe Ctrl 00:11:30.073 Firmware Version: 8.0.0 00:11:30.073 Recommended Arb Burst: 6 00:11:30.073 IEEE OUI Identifier: 00 54 52 00:11:30.073 Multi-path I/O 00:11:30.073 May have multiple subsystem ports: No 00:11:30.073 May have multiple controllers: Yes 00:11:30.073 Associated with SR-IOV VF: No 00:11:30.073 Max Data Transfer Size: 524288 00:11:30.073 Max Number of Namespaces: 
256 00:11:30.073 Max Number of I/O Queues: 64 00:11:30.073 NVMe Specification Version (VS): 1.4 00:11:30.073 NVMe Specification Version (Identify): 1.4 00:11:30.073 Maximum Queue Entries: 2048 00:11:30.073 Contiguous Queues Required: Yes 00:11:30.073 Arbitration Mechanisms Supported 00:11:30.073 Weighted Round Robin: Not Supported 00:11:30.073 Vendor Specific: Not Supported 00:11:30.073 Reset Timeout: 7500 ms 00:11:30.073 Doorbell Stride: 4 bytes 00:11:30.073 NVM Subsystem Reset: Not Supported 00:11:30.073 Command Sets Supported 00:11:30.073 NVM Command Set: Supported 00:11:30.073 Boot Partition: Not Supported 00:11:30.073 Memory Page Size Minimum: 4096 bytes 00:11:30.074 Memory Page Size Maximum: 65536 bytes 00:11:30.074 Persistent Memory Region: Not Supported 00:11:30.074 Optional Asynchronous Events Supported 00:11:30.074 Namespace Attribute Notices: Supported 00:11:30.074 Firmware Activation Notices: Not Supported 00:11:30.074 ANA Change Notices: Not Supported 00:11:30.074 PLE Aggregate Log Change Notices: Not Supported 00:11:30.074 LBA Status Info Alert Notices: Not Supported 00:11:30.074 EGE Aggregate Log Change Notices: Not Supported 00:11:30.074 Normal NVM Subsystem Shutdown event: Not Supported 00:11:30.074 Zone Descriptor Change Notices: Not Supported 00:11:30.074 Discovery Log Change Notices: Not Supported 00:11:30.074 Controller Attributes 00:11:30.074 128-bit Host Identifier: Not Supported 00:11:30.074 Non-Operational Permissive Mode: Not Supported 00:11:30.074 NVM Sets: Not Supported 00:11:30.074 Read Recovery Levels: Not Supported 00:11:30.074 Endurance Groups: Supported 00:11:30.074 Predictable Latency Mode: Not Supported 00:11:30.074 Traffic Based Keep Alive: Not Supported 00:11:30.074 Namespace Granularity: Not Supported 00:11:30.074 SQ Associations: Not Supported 00:11:30.074 UUID List: Not Supported 00:11:30.074 Multi-Domain Subsystem: Not Supported 00:11:30.074 Fixed Capacity Management: Not Supported 00:11:30.074 Variable Capacity Management: Not Supported 00:11:30.074 Delete Endurance Group: Not Supported 00:11:30.074 Delete NVM Set: Not Supported 00:11:30.074 Extended LBA Formats Supported: Supported 00:11:30.074 Flexible Data Placement Supported: Supported 00:11:30.074 00:11:30.074 Controller Memory Buffer Support 00:11:30.074 ================================ 00:11:30.074 Supported: No 00:11:30.074 00:11:30.074 Persistent Memory Region Support 00:11:30.074 ================================ 00:11:30.074 Supported: No 00:11:30.074 00:11:30.074 Admin Command Set Attributes 00:11:30.074 ============================ 00:11:30.074 Security Send/Receive: Not Supported 00:11:30.074 Format NVM: Supported 00:11:30.074 Firmware Activate/Download: Not Supported 00:11:30.074 Namespace Management: Supported 00:11:30.074 Device Self-Test: Not Supported 00:11:30.074 Directives: Supported 00:11:30.074 NVMe-MI: Not Supported 00:11:30.074 Virtualization Management: Not Supported 00:11:30.074 Doorbell Buffer Config: Supported 00:11:30.074 Get LBA Status Capability: Not Supported 00:11:30.074 Command & Feature Lockdown Capability: Not Supported 00:11:30.074 Abort Command Limit: 4 00:11:30.074 Async Event Request Limit: 4 00:11:30.074 Number of Firmware Slots: N/A 00:11:30.074 Firmware Slot 1 Read-Only: N/A 00:11:30.074 Firmware Activation Without Reset: N/A 00:11:30.074 Multiple Update Detection Support: N/A 00:11:30.074 Firmware Update Granularity: No Information Provided 00:11:30.074 Per-Namespace SMART Log: Yes 00:11:30.074 Asymmetric Namespace Access Log Page: Not Supported
00:11:30.074 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:30.074 Command Effects Log Page: Supported 00:11:30.074 Get Log Page Extended Data: Supported 00:11:30.074 Telemetry Log Pages: Not Supported 00:11:30.074 Persistent Event Log Pages: Not Supported 00:11:30.074 Supported Log Pages Log Page: May Support 00:11:30.074 Commands Supported & Effects Log Page: Not Supported 00:11:30.074 Feature Identifiers & Effects Log Page: May Support 00:11:30.074 NVMe-MI Commands & Effects Log Page: May Support 00:11:30.074 Data Area 4 for Telemetry Log: Not Supported 00:11:30.074 Error Log Page Entries Supported: 1 00:11:30.074 Keep Alive: Not Supported 00:11:30.074 00:11:30.074 NVM Command Set Attributes 00:11:30.074 ========================== 00:11:30.074 Submission Queue Entry Size 00:11:30.074 Max: 64 00:11:30.074 Min: 64 00:11:30.074 Completion Queue Entry Size 00:11:30.074 Max: 16 00:11:30.074 Min: 16 00:11:30.074 Number of Namespaces: 256 00:11:30.074 Compare Command: Supported 00:11:30.074 Write Uncorrectable Command: Not Supported 00:11:30.074 Dataset Management Command: Supported 00:11:30.074 Write Zeroes Command: Supported 00:11:30.074 Set Features Save Field: Supported 00:11:30.074 Reservations: Not Supported 00:11:30.074 Timestamp: Supported 00:11:30.074 Copy: Supported 00:11:30.074 Volatile Write Cache: Present 00:11:30.074 Atomic Write Unit (Normal): 1 00:11:30.074 Atomic Write Unit (PFail): 1 00:11:30.074 Atomic Compare & Write Unit: 1 00:11:30.074 Fused Compare & Write: Not Supported 00:11:30.074 Scatter-Gather List 00:11:30.074 SGL Command Set: Supported 00:11:30.074 SGL Keyed: Not Supported 00:11:30.074 SGL Bit Bucket Descriptor: Not Supported 00:11:30.074 SGL Metadata Pointer: Not Supported 00:11:30.074 Oversized SGL: Not Supported 00:11:30.074 SGL Metadata Address: Not Supported 00:11:30.074 SGL Offset: Not Supported 00:11:30.074 Transport SGL Data Block: Not Supported 00:11:30.074 Replay Protected Memory Block: Not Supported 00:11:30.074 00:11:30.074 Firmware Slot Information 00:11:30.074 ========================= 00:11:30.074 Active slot: 1 00:11:30.074 Slot 1 Firmware Revision: 1.0 00:11:30.074 00:11:30.074 00:11:30.074 Commands Supported and Effects 00:11:30.074 ============================== 00:11:30.074 Admin Commands 00:11:30.074 -------------- 00:11:30.074 Delete I/O Submission Queue (00h): Supported 00:11:30.074 Create I/O Submission Queue (01h): Supported 00:11:30.074 Get Log Page (02h): Supported 00:11:30.074 Delete I/O Completion Queue (04h): Supported 00:11:30.074 Create I/O Completion Queue (05h): Supported 00:11:30.074 Identify (06h): Supported 00:11:30.074 Abort (08h): Supported 00:11:30.074 Set Features (09h): Supported 00:11:30.074 Get Features (0Ah): Supported 00:11:30.074 Asynchronous Event Request (0Ch): Supported 00:11:30.074 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:30.074 Directive Send (19h): Supported 00:11:30.074 Directive Receive (1Ah): Supported 00:11:30.074 Virtualization Management (1Ch): Supported 00:11:30.074 Doorbell Buffer Config (7Ch): Supported 00:11:30.074 Format NVM (80h): Supported LBA-Change 00:11:30.074 I/O Commands 00:11:30.074 ------------ 00:11:30.074 Flush (00h): Supported LBA-Change 00:11:30.074 Write (01h): Supported LBA-Change 00:11:30.074 Read (02h): Supported 00:11:30.074 Compare (05h): Supported 00:11:30.074 Write Zeroes (08h): Supported LBA-Change 00:11:30.074 Dataset Management (09h): Supported LBA-Change 00:11:30.074 Unknown (0Ch): Supported 00:11:30.074 Unknown (12h): Supported 00:11:30.074 Copy
(19h): Supported LBA-Change 00:11:30.074 Unknown (1Dh): Supported LBA-Change 00:11:30.074 00:11:30.074 Error Log 00:11:30.074 ========= 00:11:30.074 00:11:30.074 Arbitration 00:11:30.074 =========== 00:11:30.074 Arbitration Burst: no limit 00:11:30.074 00:11:30.074 Power Management 00:11:30.074 ================ 00:11:30.074 Number of Power States: 1 00:11:30.074 Current Power State: Power State #0 00:11:30.074 Power State #0: 00:11:30.074 Max Power: 25.00 W 00:11:30.074 Non-Operational State: Operational 00:11:30.074 Entry Latency: 16 microseconds 00:11:30.074 Exit Latency: 4 microseconds 00:11:30.074 Relative Read Throughput: 0 00:11:30.074 Relative Read Latency: 0 00:11:30.074 Relative Write Throughput: 0 00:11:30.074 Relative Write Latency: 0 00:11:30.074 Idle Power: Not Reported 00:11:30.074 Active Power: Not Reported 00:11:30.074 Non-Operational Permissive Mode: Not Supported 00:11:30.074 00:11:30.074 Health Information 00:11:30.074 ================== 00:11:30.074 Critical Warnings: 00:11:30.074 Available Spare Space: OK 00:11:30.074 Temperature: OK 00:11:30.074 Device Reliability: OK 00:11:30.074 Read Only: No 00:11:30.074 Volatile Memory Backup: OK 00:11:30.074 Current Temperature: 323 Kelvin (50 Celsius) 00:11:30.074 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:30.074 Available Spare: 0% 00:11:30.074 Available Spare Threshold: 0% 00:11:30.074 Life Percentage Used: 0% 00:11:30.074 Data Units Read: 1070 00:11:30.074 Data Units Written: 999 00:11:30.074 Host Read Commands: 35319 00:11:30.074 Host Write Commands: 34745 00:11:30.074 Controller Busy Time: 0 minutes 00:11:30.074 Power Cycles: 0 00:11:30.074 Power On Hours: 0 hours 00:11:30.074 Unsafe Shutdowns: 0 00:11:30.074 Unrecoverable Media Errors: 0 00:11:30.074 Lifetime Error Log Entries: 0 00:11:30.074 Warning Temperature Time: 0 minutes 00:11:30.074 Critical Temperature Time: 0 minutes 00:11:30.074 00:11:30.074 Number of Queues 00:11:30.074 ================ 00:11:30.074 Number of I/O Submission Queues: 64 00:11:30.074 Number of I/O Completion Queues: 64 00:11:30.074 00:11:30.074 ZNS Specific Controller Data 00:11:30.074 ============================ 00:11:30.074 Zone Append Size Limit: 0 00:11:30.074 00:11:30.074 00:11:30.074 Active Namespaces 00:11:30.074 ================= 00:11:30.074 Namespace ID:1 00:11:30.074 Error Recovery Timeout: Unlimited 00:11:30.075 Command Set Identifier: NVM (00h) 00:11:30.075 Deallocate: Supported 00:11:30.075 Deallocated/Unwritten Error: Supported 00:11:30.075 Deallocated Read Value: All 0x00 00:11:30.075 Deallocate in Write Zeroes: Not Supported 00:11:30.075 Deallocated Guard Field: 0xFFFF 00:11:30.075 Flush: Supported 00:11:30.075 Reservation: Not Supported 00:11:30.075 Namespace Sharing Capabilities: Multiple Controllers 00:11:30.075 Size (in LBAs): 262144 (1GiB) 00:11:30.075 Capacity (in LBAs): 262144 (1GiB) 00:11:30.075 Utilization (in LBAs): 262144 (1GiB) 00:11:30.075 Thin Provisioning: Not Supported 00:11:30.075 Per-NS Atomic Units: No 00:11:30.075 Maximum Single Source Range Length: 128 00:11:30.075 Maximum Copy Length: 128 00:11:30.075 Maximum Source Range Count: 128 00:11:30.075 NGUID/EUI64 Never Reused: No 00:11:30.075 Namespace Write Protected: No 00:11:30.075 Endurance group ID: 1 00:11:30.075 Number of LBA Formats: 8 00:11:30.075 Current LBA Format: LBA Format #04 00:11:30.075 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:30.075 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:30.075 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:30.075 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:30.075 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:30.075 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:30.075 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:30.075 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:30.075 00:11:30.075 Get Feature FDP: 00:11:30.075 ================ 00:11:30.075 Enabled: Yes 00:11:30.075 FDP configuration index: 0 00:11:30.075 00:11:30.075 FDP configurations log page 00:11:30.075 =========================== 00:11:30.075 Number of FDP configurations: 1 00:11:30.075 Version: 0 00:11:30.075 Size: 112 00:11:30.075 FDP Configuration Descriptor: 0 00:11:30.075 Descriptor Size: 96 00:11:30.075 Reclaim Group Identifier format: 2 00:11:30.075 FDP Volatile Write Cache: Not Present 00:11:30.075 FDP Configuration: Valid 00:11:30.075 Vendor Specific Size: 0 00:11:30.075 Number of Reclaim Groups: 2 00:11:30.075 Number of Reclaim Unit Handles: 8 00:11:30.075 Max Placement Identifiers: 128 00:11:30.075 Number of Namespaces Supported: 256 00:11:30.075 Reclaim Unit Nominal Size: 6000000 bytes 00:11:30.075 Estimated Reclaim Unit Time Limit: Not Reported 00:11:30.075 RUH Desc #000: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #001: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #002: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #003: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #004: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #005: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #006: RUH Type: Initially Isolated 00:11:30.075 RUH Desc #007: RUH Type: Initially Isolated 00:11:30.075 00:11:30.075 FDP reclaim unit handle usage log page 00:11:30.075 ====================================== 00:11:30.075 Number of Reclaim Unit Handles: 8 00:11:30.075 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:30.075 RUH Usage Desc #001: RUH Attributes: Unused 00:11:30.075 RUH Usage Desc #002: RUH Attributes: Unused 00:11:30.075 RUH Usage Desc #003: RUH Attributes: Unused 00:11:30.075 RUH Usage Desc #004: RUH Attributes: Unused 00:11:30.075 RUH Usage Desc #005: RUH Attributes: Unused 00:11:30.075 RUH Usage Desc #006: RUH Attributes: Unused 00:11:30.075 RUH Usage Desc #007: RUH Attributes: Unused 00:11:30.075 00:11:30.075 FDP statistics log page 00:11:30.075 ======================= 00:11:30.075 Host bytes with metadata written: 626237440 00:11:30.075 Media bytes with metadata written: 626319360 00:11:30.075 Media bytes erased: 0 00:11:30.075 00:11:30.075 FDP events log page 00:11:30.075 =================== 00:11:30.075 Number of FDP events: 0 00:11:30.075 00:11:30.075 NVM Specific Namespace Data 00:11:30.075 =========================== 00:11:30.075 Logical Block Storage Tag Mask: 0 00:11:30.075 Protection Information Capabilities: 00:11:30.075 16b Guard Protection Information Storage Tag Support: No 00:11:30.075 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:30.075 Storage Tag Check Read Support: No 00:11:30.075 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.075 00:11:30.075 real 0m1.665s 00:11:30.075 user 0m0.566s 00:11:30.075 sys 0m0.866s 00:11:30.075 12:21:53 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.075 12:21:53 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:30.075 ************************************ 00:11:30.075 END TEST nvme_identify 00:11:30.075 ************************************ 00:11:30.075 12:21:53 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:30.075 12:21:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:30.075 12:21:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.075 12:21:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:30.075 ************************************ 00:11:30.075 START TEST nvme_perf 00:11:30.075 ************************************ 00:11:30.075 12:21:53 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:11:30.075 12:21:53 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:31.458 Initializing NVMe Controllers 00:11:31.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:31.458 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:31.458 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:31.458 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:31.458 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:31.458 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:31.458 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:31.458 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:31.458 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:31.458 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:31.458 Initialization complete. Launching workers. 
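Before the results, a quick expansion of the spdk_nvme_perf invocation above for anyone replaying this run by hand. This is a minimal annotated sketch: the flag meanings are assumptions taken from the tool's usage text rather than anything stated in this log, and the -N description in particular should be checked against the SPDK build in use.

# Annotated sketch of the nvme_perf invocation (flag meanings assumed from --help):
#   -q 128    queue depth: keep 128 I/Os outstanding per namespace
#   -w read   workload: 100% sequential reads
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       -L enables software latency tracking; the second L requests the
#             detailed per-range histograms printed after the summary tables
#   -i 0      shared-memory group ID (the same -i 0 passed to spdk_nvme_identify,
#             which selected controllers via -r 'trtype:PCIe traddr:0000:00:13.0')
#   -N        assumed: skip the shutdown notification when detaching controllers
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
# Sanity check on the table below: MiB/s = IOPS x 12288 / 2^20, so
# 13582.78 IOPS x 12288 B = 159.17 MiB/s, matching the first row.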
00:11:31.458 ======================================================== 00:11:31.458 Latency(us) 00:11:31.458 Device Information : IOPS MiB/s Average min max 00:11:31.458 PCIE (0000:00:10.0) NSID 1 from core 0: 13582.78 159.17 9441.74 8081.46 54795.09 00:11:31.458 PCIE (0000:00:11.0) NSID 1 from core 0: 13582.78 159.17 9426.41 8144.52 53447.68 00:11:31.458 PCIE (0000:00:13.0) NSID 1 from core 0: 13582.78 159.17 9409.17 8167.36 53445.35 00:11:31.458 PCIE (0000:00:12.0) NSID 1 from core 0: 13582.78 159.17 9391.89 8166.75 52120.95 00:11:31.458 PCIE (0000:00:12.0) NSID 2 from core 0: 13582.78 159.17 9374.23 8145.37 50930.91 00:11:31.458 PCIE (0000:00:12.0) NSID 3 from core 0: 13646.54 159.92 9313.06 8184.05 38936.25 00:11:31.458 ======================================================== 00:11:31.458 Total : 81560.43 955.79 9392.69 8081.46 54795.09 00:11:31.458 00:11:31.458 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:31.458 ================================================================================= 00:11:31.458 1.00000% : 8317.018us 00:11:31.458 10.00000% : 8527.576us 00:11:31.458 25.00000% : 8685.494us 00:11:31.458 50.00000% : 8948.691us 00:11:31.458 75.00000% : 9211.888us 00:11:31.458 90.00000% : 9475.084us 00:11:31.458 95.00000% : 9790.920us 00:11:31.458 98.00000% : 14423.184us 00:11:31.458 99.00000% : 19687.120us 00:11:31.458 99.50000% : 44427.618us 00:11:31.458 99.90000% : 54744.932us 00:11:31.458 99.99000% : 54744.932us 00:11:31.458 99.99900% : 55166.047us 00:11:31.458 99.99990% : 55166.047us 00:11:31.458 99.99999% : 55166.047us 00:11:31.458 00:11:31.458 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:31.458 ================================================================================= 00:11:31.458 1.00000% : 8369.658us 00:11:31.458 10.00000% : 8580.215us 00:11:31.458 25.00000% : 8738.133us 00:11:31.458 50.00000% : 8948.691us 00:11:31.458 75.00000% : 9211.888us 00:11:31.458 90.00000% : 9422.445us 00:11:31.458 95.00000% : 9738.281us 00:11:31.458 98.00000% : 14002.069us 00:11:31.458 99.00000% : 19792.398us 00:11:31.458 99.50000% : 42743.158us 00:11:31.458 99.90000% : 53271.030us 00:11:31.458 99.99000% : 53481.587us 00:11:31.458 99.99900% : 53481.587us 00:11:31.458 99.99990% : 53481.587us 00:11:31.458 99.99999% : 53481.587us 00:11:31.458 00:11:31.458 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:31.458 ================================================================================= 00:11:31.458 1.00000% : 8369.658us 00:11:31.458 10.00000% : 8580.215us 00:11:31.458 25.00000% : 8738.133us 00:11:31.458 50.00000% : 8948.691us 00:11:31.458 75.00000% : 9211.888us 00:11:31.458 90.00000% : 9422.445us 00:11:31.458 95.00000% : 9738.281us 00:11:31.458 98.00000% : 13423.036us 00:11:31.458 99.00000% : 20318.792us 00:11:31.458 99.50000% : 41269.256us 00:11:31.458 99.90000% : 53271.030us 00:11:31.458 99.99000% : 53481.587us 00:11:31.458 99.99900% : 53481.587us 00:11:31.458 99.99990% : 53481.587us 00:11:31.458 99.99999% : 53481.587us 00:11:31.458 00:11:31.458 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:31.458 ================================================================================= 00:11:31.458 1.00000% : 8422.297us 00:11:31.458 10.00000% : 8580.215us 00:11:31.458 25.00000% : 8738.133us 00:11:31.458 50.00000% : 8948.691us 00:11:31.459 75.00000% : 9211.888us 00:11:31.459 90.00000% : 9422.445us 00:11:31.459 95.00000% : 9738.281us 00:11:31.459 98.00000% : 13107.200us 00:11:31.459 99.00000% : 
20634.628us 00:11:31.459 99.50000% : 39374.239us 00:11:31.459 99.90000% : 51797.128us 00:11:31.459 99.99000% : 52218.243us 00:11:31.459 99.99900% : 52218.243us 00:11:31.459 99.99990% : 52218.243us 00:11:31.459 99.99999% : 52218.243us 00:11:31.459 00:11:31.459 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:31.459 ================================================================================= 00:11:31.459 1.00000% : 8369.658us 00:11:31.459 10.00000% : 8580.215us 00:11:31.459 25.00000% : 8738.133us 00:11:31.459 50.00000% : 8948.691us 00:11:31.459 75.00000% : 9211.888us 00:11:31.459 90.00000% : 9422.445us 00:11:31.459 95.00000% : 9738.281us 00:11:31.459 98.00000% : 12791.364us 00:11:31.459 99.00000% : 19476.562us 00:11:31.459 99.50000% : 38742.567us 00:11:31.459 99.90000% : 50744.341us 00:11:31.459 99.99000% : 50954.898us 00:11:31.459 99.99900% : 50954.898us 00:11:31.459 99.99990% : 50954.898us 00:11:31.459 99.99999% : 50954.898us 00:11:31.459 00:11:31.459 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:31.459 ================================================================================= 00:11:31.459 1.00000% : 8369.658us 00:11:31.459 10.00000% : 8580.215us 00:11:31.459 25.00000% : 8738.133us 00:11:31.459 50.00000% : 8948.691us 00:11:31.459 75.00000% : 9211.888us 00:11:31.459 90.00000% : 9422.445us 00:11:31.459 95.00000% : 9790.920us 00:11:31.459 98.00000% : 14739.020us 00:11:31.459 99.00000% : 18950.169us 00:11:31.459 99.50000% : 29899.155us 00:11:31.459 99.90000% : 38742.567us 00:11:31.459 99.99000% : 38953.124us 00:11:31.459 99.99900% : 38953.124us 00:11:31.459 99.99990% : 38953.124us 00:11:31.459 99.99999% : 38953.124us 00:11:31.459 00:11:31.459 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:31.459 ============================================================================== 00:11:31.459 Range in us Cumulative IO count 00:11:31.459 8053.822 - 8106.461: 0.0440% ( 6) 00:11:31.459 8159.100 - 8211.740: 0.1834% ( 19) 00:11:31.459 8211.740 - 8264.379: 0.5428% ( 49) 00:11:31.459 8264.379 - 8317.018: 1.9073% ( 186) 00:11:31.459 8317.018 - 8369.658: 3.9173% ( 274) 00:11:31.459 8369.658 - 8422.297: 6.6975% ( 379) 00:11:31.459 8422.297 - 8474.937: 9.9325% ( 441) 00:11:31.459 8474.937 - 8527.576: 14.0478% ( 561) 00:11:31.459 8527.576 - 8580.215: 18.1778% ( 563) 00:11:31.459 8580.215 - 8632.855: 22.4545% ( 583) 00:11:31.459 8632.855 - 8685.494: 27.0613% ( 628) 00:11:31.459 8685.494 - 8738.133: 31.8295% ( 650) 00:11:31.459 8738.133 - 8790.773: 36.4950% ( 636) 00:11:31.459 8790.773 - 8843.412: 41.2265% ( 645) 00:11:31.459 8843.412 - 8896.051: 46.0241% ( 654) 00:11:31.459 8896.051 - 8948.691: 50.9463% ( 671) 00:11:31.459 8948.691 - 9001.330: 55.7732% ( 658) 00:11:31.459 9001.330 - 9053.969: 60.6587% ( 666) 00:11:31.459 9053.969 - 9106.609: 65.7497% ( 694) 00:11:31.459 9106.609 - 9159.248: 70.6353% ( 666) 00:11:31.459 9159.248 - 9211.888: 75.4621% ( 658) 00:11:31.459 9211.888 - 9264.527: 80.0029% ( 619) 00:11:31.459 9264.527 - 9317.166: 83.6708% ( 500) 00:11:31.459 9317.166 - 9369.806: 86.7664% ( 422) 00:11:31.459 9369.806 - 9422.445: 89.1359% ( 323) 00:11:31.459 9422.445 - 9475.084: 90.9771% ( 251) 00:11:31.459 9475.084 - 9527.724: 92.2975% ( 180) 00:11:31.459 9527.724 - 9580.363: 93.3539% ( 144) 00:11:31.459 9580.363 - 9633.002: 93.9994% ( 88) 00:11:31.459 9633.002 - 9685.642: 94.4616% ( 63) 00:11:31.459 9685.642 - 9738.281: 94.8504% ( 53) 00:11:31.459 9738.281 - 9790.920: 95.1658% ( 43) 00:11:31.459 9790.920 - 9843.560: 95.4225% ( 
35) 00:11:31.459 9843.560 - 9896.199: 95.7086% ( 39) 00:11:31.459 9896.199 - 9948.839: 95.9067% ( 27) 00:11:31.459 9948.839 - 10001.478: 96.1268% ( 30) 00:11:31.459 10001.478 - 10054.117: 96.3028% ( 24) 00:11:31.459 10054.117 - 10106.757: 96.4349% ( 18) 00:11:31.459 10106.757 - 10159.396: 96.4862% ( 7) 00:11:31.459 10159.396 - 10212.035: 96.5156% ( 4) 00:11:31.459 10212.035 - 10264.675: 96.5449% ( 4) 00:11:31.459 10264.675 - 10317.314: 96.5669% ( 3) 00:11:31.459 10317.314 - 10369.953: 96.6109% ( 6) 00:11:31.459 10369.953 - 10422.593: 96.6329% ( 3) 00:11:31.459 10422.593 - 10475.232: 96.6843% ( 7) 00:11:31.459 10475.232 - 10527.871: 96.6989% ( 2) 00:11:31.459 10527.871 - 10580.511: 96.7430% ( 6) 00:11:31.459 10580.511 - 10633.150: 96.7723% ( 4) 00:11:31.459 10633.150 - 10685.790: 96.8016% ( 4) 00:11:31.459 10685.790 - 10738.429: 96.8310% ( 4) 00:11:31.459 10738.429 - 10791.068: 96.8750% ( 6) 00:11:31.459 10791.068 - 10843.708: 96.8970% ( 3) 00:11:31.459 10843.708 - 10896.347: 96.9484% ( 7) 00:11:31.459 10896.347 - 10948.986: 96.9704% ( 3) 00:11:31.459 10948.986 - 11001.626: 97.0290% ( 8) 00:11:31.459 11001.626 - 11054.265: 97.0804% ( 7) 00:11:31.459 11054.265 - 11106.904: 97.1244% ( 6) 00:11:31.459 11106.904 - 11159.544: 97.1758% ( 7) 00:11:31.459 11159.544 - 11212.183: 97.2198% ( 6) 00:11:31.459 11212.183 - 11264.822: 97.2491% ( 4) 00:11:31.459 11264.822 - 11317.462: 97.2711% ( 3) 00:11:31.459 11317.462 - 11370.101: 97.3078% ( 5) 00:11:31.459 11370.101 - 11422.741: 97.3225% ( 2) 00:11:31.459 11422.741 - 11475.380: 97.3445% ( 3) 00:11:31.459 11475.380 - 11528.019: 97.3592% ( 2) 00:11:31.459 11528.019 - 11580.659: 97.3665% ( 1) 00:11:31.459 11580.659 - 11633.298: 97.3885% ( 3) 00:11:31.459 11633.298 - 11685.937: 97.4105% ( 3) 00:11:31.459 11685.937 - 11738.577: 97.4178% ( 1) 00:11:31.459 11738.577 - 11791.216: 97.4325% ( 2) 00:11:31.459 11791.216 - 11843.855: 97.4472% ( 2) 00:11:31.459 11843.855 - 11896.495: 97.4692% ( 3) 00:11:31.459 11896.495 - 11949.134: 97.4839% ( 2) 00:11:31.459 11949.134 - 12001.773: 97.4985% ( 2) 00:11:31.459 12001.773 - 12054.413: 97.5132% ( 2) 00:11:31.459 12054.413 - 12107.052: 97.5352% ( 3) 00:11:31.459 12107.052 - 12159.692: 97.5425% ( 1) 00:11:31.459 12159.692 - 12212.331: 97.5572% ( 2) 00:11:31.459 12212.331 - 12264.970: 97.5719% ( 2) 00:11:31.459 12264.970 - 12317.610: 97.5939% ( 3) 00:11:31.459 12317.610 - 12370.249: 97.6159% ( 3) 00:11:31.459 12370.249 - 12422.888: 97.6232% ( 1) 00:11:31.459 12422.888 - 12475.528: 97.6452% ( 3) 00:11:31.459 12475.528 - 12528.167: 97.6526% ( 1) 00:11:31.459 13265.118 - 13317.757: 97.6673% ( 2) 00:11:31.459 13317.757 - 13370.397: 97.6893% ( 3) 00:11:31.459 13370.397 - 13423.036: 97.7039% ( 2) 00:11:31.459 13423.036 - 13475.676: 97.7186% ( 2) 00:11:31.459 13475.676 - 13580.954: 97.7553% ( 5) 00:11:31.459 13580.954 - 13686.233: 97.7920% ( 5) 00:11:31.459 13686.233 - 13791.512: 97.8213% ( 4) 00:11:31.459 13791.512 - 13896.790: 97.8653% ( 6) 00:11:31.459 13896.790 - 14002.069: 97.8947% ( 4) 00:11:31.459 14002.069 - 14107.348: 97.9240% ( 4) 00:11:31.459 14107.348 - 14212.627: 97.9607% ( 5) 00:11:31.459 14212.627 - 14317.905: 97.9974% ( 5) 00:11:31.459 14317.905 - 14423.184: 98.0267% ( 4) 00:11:31.459 14423.184 - 14528.463: 98.0634% ( 5) 00:11:31.459 14528.463 - 14633.741: 98.1001% ( 5) 00:11:31.459 14633.741 - 14739.020: 98.1221% ( 3) 00:11:31.459 17581.545 - 17686.824: 98.1514% ( 4) 00:11:31.459 17686.824 - 17792.103: 98.1808% ( 4) 00:11:31.459 17792.103 - 17897.382: 98.2174% ( 5) 00:11:31.459 17897.382 - 18002.660: 98.2688% ( 7) 
00:11:31.459 18002.660 - 18107.939: 98.3055% ( 5) 00:11:31.459 18107.939 - 18213.218: 98.3348% ( 4) 00:11:31.459 18213.218 - 18318.496: 98.3788% ( 6) 00:11:31.459 18318.496 - 18423.775: 98.4082% ( 4) 00:11:31.459 18423.775 - 18529.054: 98.4522% ( 6) 00:11:31.459 18529.054 - 18634.333: 98.4888% ( 5) 00:11:31.459 18634.333 - 18739.611: 98.5622% ( 10) 00:11:31.459 18739.611 - 18844.890: 98.6429% ( 11) 00:11:31.459 18844.890 - 18950.169: 98.7163% ( 10) 00:11:31.459 18950.169 - 19055.447: 98.7603% ( 6) 00:11:31.459 19055.447 - 19160.726: 98.8043% ( 6) 00:11:31.459 19160.726 - 19266.005: 98.8556% ( 7) 00:11:31.459 19266.005 - 19371.284: 98.8923% ( 5) 00:11:31.459 19371.284 - 19476.562: 98.9437% ( 7) 00:11:31.459 19476.562 - 19581.841: 98.9877% ( 6) 00:11:31.459 19581.841 - 19687.120: 99.0317% ( 6) 00:11:31.459 19687.120 - 19792.398: 99.0610% ( 4) 00:11:31.459 42532.601 - 42743.158: 99.0904% ( 4) 00:11:31.459 42743.158 - 42953.716: 99.1344% ( 6) 00:11:31.459 42953.716 - 43164.273: 99.1931% ( 8) 00:11:31.459 43164.273 - 43374.831: 99.2518% ( 8) 00:11:31.459 43374.831 - 43585.388: 99.2958% ( 6) 00:11:31.459 43585.388 - 43795.945: 99.3618% ( 9) 00:11:31.459 43795.945 - 44006.503: 99.4131% ( 7) 00:11:31.459 44006.503 - 44217.060: 99.4645% ( 7) 00:11:31.459 44217.060 - 44427.618: 99.5158% ( 7) 00:11:31.459 44427.618 - 44638.175: 99.5305% ( 2) 00:11:31.459 52849.915 - 53060.472: 99.5892% ( 8) 00:11:31.459 53060.472 - 53271.030: 99.6406% ( 7) 00:11:31.459 53271.030 - 53481.587: 99.6919% ( 7) 00:11:31.459 53481.587 - 53692.145: 99.7359% ( 6) 00:11:31.459 53692.145 - 53902.702: 99.7946% ( 8) 00:11:31.459 53902.702 - 54323.817: 99.8973% ( 14) 00:11:31.459 54323.817 - 54744.932: 99.9927% ( 13) 00:11:31.459 54744.932 - 55166.047: 100.0000% ( 1) 00:11:31.459 00:11:31.459 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:31.459 ============================================================================== 00:11:31.459 Range in us Cumulative IO count 00:11:31.459 8106.461 - 8159.100: 0.0147% ( 2) 00:11:31.459 8159.100 - 8211.740: 0.0513% ( 5) 00:11:31.459 8211.740 - 8264.379: 0.0807% ( 4) 00:11:31.459 8264.379 - 8317.018: 0.3668% ( 39) 00:11:31.459 8317.018 - 8369.658: 1.2764% ( 124) 00:11:31.459 8369.658 - 8422.297: 3.0076% ( 236) 00:11:31.459 8422.297 - 8474.937: 5.6778% ( 364) 00:11:31.459 8474.937 - 8527.576: 9.6317% ( 539) 00:11:31.459 8527.576 - 8580.215: 13.6884% ( 553) 00:11:31.459 8580.215 - 8632.855: 18.5886% ( 668) 00:11:31.459 8632.855 - 8685.494: 23.9143% ( 726) 00:11:31.459 8685.494 - 8738.133: 29.3427% ( 740) 00:11:31.459 8738.133 - 8790.773: 34.8812% ( 755) 00:11:31.459 8790.773 - 8843.412: 40.3536% ( 746) 00:11:31.460 8843.412 - 8896.051: 45.9727% ( 766) 00:11:31.460 8896.051 - 8948.691: 51.6505% ( 774) 00:11:31.460 8948.691 - 9001.330: 57.4531% ( 791) 00:11:31.460 9001.330 - 9053.969: 63.1382% ( 775) 00:11:31.460 9053.969 - 9106.609: 68.8234% ( 775) 00:11:31.460 9106.609 - 9159.248: 74.3618% ( 755) 00:11:31.460 9159.248 - 9211.888: 79.2840% ( 671) 00:11:31.460 9211.888 - 9264.527: 83.4287% ( 565) 00:11:31.460 9264.527 - 9317.166: 86.6857% ( 444) 00:11:31.460 9317.166 - 9369.806: 89.2165% ( 345) 00:11:31.460 9369.806 - 9422.445: 91.1018% ( 257) 00:11:31.460 9422.445 - 9475.084: 92.4002% ( 177) 00:11:31.460 9475.084 - 9527.724: 93.3172% ( 125) 00:11:31.460 9527.724 - 9580.363: 93.8967% ( 79) 00:11:31.460 9580.363 - 9633.002: 94.3882% ( 67) 00:11:31.460 9633.002 - 9685.642: 94.7697% ( 52) 00:11:31.460 9685.642 - 9738.281: 95.0704% ( 41) 00:11:31.460 9738.281 - 9790.920: 95.3859% 
( 43) 00:11:31.460 9790.920 - 9843.560: 95.6573% ( 37) 00:11:31.460 9843.560 - 9896.199: 95.8553% ( 27) 00:11:31.460 9896.199 - 9948.839: 96.0314% ( 24) 00:11:31.460 9948.839 - 10001.478: 96.1634% ( 18) 00:11:31.460 10001.478 - 10054.117: 96.2588% ( 13) 00:11:31.460 10054.117 - 10106.757: 96.3395% ( 11) 00:11:31.460 10106.757 - 10159.396: 96.3982% ( 8) 00:11:31.460 10159.396 - 10212.035: 96.4349% ( 5) 00:11:31.460 10212.035 - 10264.675: 96.4495% ( 2) 00:11:31.460 10264.675 - 10317.314: 96.4715% ( 3) 00:11:31.460 10317.314 - 10369.953: 96.4935% ( 3) 00:11:31.460 10369.953 - 10422.593: 96.5376% ( 6) 00:11:31.460 10422.593 - 10475.232: 96.5816% ( 6) 00:11:31.460 10475.232 - 10527.871: 96.6109% ( 4) 00:11:31.460 10527.871 - 10580.511: 96.6476% ( 5) 00:11:31.460 10580.511 - 10633.150: 96.6916% ( 6) 00:11:31.460 10633.150 - 10685.790: 96.7210% ( 4) 00:11:31.460 10685.790 - 10738.429: 96.7650% ( 6) 00:11:31.460 10738.429 - 10791.068: 96.8163% ( 7) 00:11:31.460 10791.068 - 10843.708: 96.8750% ( 8) 00:11:31.460 10843.708 - 10896.347: 96.9337% ( 8) 00:11:31.460 10896.347 - 10948.986: 96.9777% ( 6) 00:11:31.460 10948.986 - 11001.626: 97.0364% ( 8) 00:11:31.460 11001.626 - 11054.265: 97.0731% ( 5) 00:11:31.460 11054.265 - 11106.904: 97.1097% ( 5) 00:11:31.460 11106.904 - 11159.544: 97.1464% ( 5) 00:11:31.460 11159.544 - 11212.183: 97.1831% ( 5) 00:11:31.460 11212.183 - 11264.822: 97.2198% ( 5) 00:11:31.460 11264.822 - 11317.462: 97.2565% ( 5) 00:11:31.460 11317.462 - 11370.101: 97.3005% ( 6) 00:11:31.460 11370.101 - 11422.741: 97.3371% ( 5) 00:11:31.460 11422.741 - 11475.380: 97.3738% ( 5) 00:11:31.460 11475.380 - 11528.019: 97.4032% ( 4) 00:11:31.460 11528.019 - 11580.659: 97.4398% ( 5) 00:11:31.460 11580.659 - 11633.298: 97.4692% ( 4) 00:11:31.460 11633.298 - 11685.937: 97.5132% ( 6) 00:11:31.460 11685.937 - 11738.577: 97.5425% ( 4) 00:11:31.460 11738.577 - 11791.216: 97.5646% ( 3) 00:11:31.460 11791.216 - 11843.855: 97.5866% ( 3) 00:11:31.460 11843.855 - 11896.495: 97.6012% ( 2) 00:11:31.460 11896.495 - 11949.134: 97.6232% ( 3) 00:11:31.460 11949.134 - 12001.773: 97.6452% ( 3) 00:11:31.460 12001.773 - 12054.413: 97.6526% ( 1) 00:11:31.460 13001.921 - 13054.561: 97.6673% ( 2) 00:11:31.460 13054.561 - 13107.200: 97.6893% ( 3) 00:11:31.460 13107.200 - 13159.839: 97.7039% ( 2) 00:11:31.460 13159.839 - 13212.479: 97.7259% ( 3) 00:11:31.460 13212.479 - 13265.118: 97.7479% ( 3) 00:11:31.460 13265.118 - 13317.757: 97.7700% ( 3) 00:11:31.460 13317.757 - 13370.397: 97.7846% ( 2) 00:11:31.460 13370.397 - 13423.036: 97.8066% ( 3) 00:11:31.460 13423.036 - 13475.676: 97.8286% ( 3) 00:11:31.460 13475.676 - 13580.954: 97.8727% ( 6) 00:11:31.460 13580.954 - 13686.233: 97.9167% ( 6) 00:11:31.460 13686.233 - 13791.512: 97.9533% ( 5) 00:11:31.460 13791.512 - 13896.790: 97.9974% ( 6) 00:11:31.460 13896.790 - 14002.069: 98.0414% ( 6) 00:11:31.460 14002.069 - 14107.348: 98.0854% ( 6) 00:11:31.460 14107.348 - 14212.627: 98.1221% ( 5) 00:11:31.460 17265.709 - 17370.988: 98.1661% ( 6) 00:11:31.460 17370.988 - 17476.267: 98.2101% ( 6) 00:11:31.460 17476.267 - 17581.545: 98.2614% ( 7) 00:11:31.460 17581.545 - 17686.824: 98.3055% ( 6) 00:11:31.460 17686.824 - 17792.103: 98.3568% ( 7) 00:11:31.460 17792.103 - 17897.382: 98.4008% ( 6) 00:11:31.460 17897.382 - 18002.660: 98.4448% ( 6) 00:11:31.460 18002.660 - 18107.939: 98.4888% ( 6) 00:11:31.460 18107.939 - 18213.218: 98.5402% ( 7) 00:11:31.460 18213.218 - 18318.496: 98.5842% ( 6) 00:11:31.460 18318.496 - 18423.775: 98.5915% ( 1) 00:11:31.460 18844.890 - 18950.169: 98.5989% ( 1) 
00:11:31.460 18950.169 - 19055.447: 98.6502% ( 7) 00:11:31.460 19055.447 - 19160.726: 98.7016% ( 7) 00:11:31.460 19160.726 - 19266.005: 98.7603% ( 8) 00:11:31.460 19266.005 - 19371.284: 98.8116% ( 7) 00:11:31.460 19371.284 - 19476.562: 98.8630% ( 7) 00:11:31.460 19476.562 - 19581.841: 98.9143% ( 7) 00:11:31.460 19581.841 - 19687.120: 98.9730% ( 8) 00:11:31.460 19687.120 - 19792.398: 99.0170% ( 6) 00:11:31.460 19792.398 - 19897.677: 99.0610% ( 6) 00:11:31.460 40848.141 - 41058.699: 99.1124% ( 7) 00:11:31.460 41058.699 - 41269.256: 99.1711% ( 8) 00:11:31.460 41269.256 - 41479.814: 99.2224% ( 7) 00:11:31.460 41479.814 - 41690.371: 99.2738% ( 7) 00:11:31.460 41690.371 - 41900.929: 99.3251% ( 7) 00:11:31.460 41900.929 - 42111.486: 99.3765% ( 7) 00:11:31.460 42111.486 - 42322.043: 99.4352% ( 8) 00:11:31.460 42322.043 - 42532.601: 99.4865% ( 7) 00:11:31.460 42532.601 - 42743.158: 99.5305% ( 6) 00:11:31.460 51586.570 - 51797.128: 99.5525% ( 3) 00:11:31.460 51797.128 - 52007.685: 99.6112% ( 8) 00:11:31.460 52007.685 - 52218.243: 99.6552% ( 6) 00:11:31.460 52218.243 - 52428.800: 99.7139% ( 8) 00:11:31.460 52428.800 - 52639.357: 99.7799% ( 9) 00:11:31.460 52639.357 - 52849.915: 99.8313% ( 7) 00:11:31.460 52849.915 - 53060.472: 99.8900% ( 8) 00:11:31.460 53060.472 - 53271.030: 99.9487% ( 8) 00:11:31.460 53271.030 - 53481.587: 100.0000% ( 7) 00:11:31.460 00:11:31.460 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:31.460 ============================================================================== 00:11:31.460 Range in us Cumulative IO count 00:11:31.460 8159.100 - 8211.740: 0.0367% ( 5) 00:11:31.460 8211.740 - 8264.379: 0.1467% ( 15) 00:11:31.460 8264.379 - 8317.018: 0.4108% ( 36) 00:11:31.460 8317.018 - 8369.658: 1.3498% ( 128) 00:11:31.460 8369.658 - 8422.297: 3.0883% ( 237) 00:11:31.460 8422.297 - 8474.937: 5.8392% ( 375) 00:11:31.460 8474.937 - 8527.576: 9.4924% ( 498) 00:11:31.460 8527.576 - 8580.215: 13.6150% ( 562) 00:11:31.460 8580.215 - 8632.855: 18.5739% ( 676) 00:11:31.460 8632.855 - 8685.494: 23.8556% ( 720) 00:11:31.460 8685.494 - 8738.133: 29.2254% ( 732) 00:11:31.460 8738.133 - 8790.773: 34.8592% ( 768) 00:11:31.460 8790.773 - 8843.412: 40.3169% ( 744) 00:11:31.460 8843.412 - 8896.051: 45.9580% ( 769) 00:11:31.460 8896.051 - 8948.691: 51.7459% ( 789) 00:11:31.460 8948.691 - 9001.330: 57.4824% ( 782) 00:11:31.460 9001.330 - 9053.969: 63.2702% ( 789) 00:11:31.460 9053.969 - 9106.609: 69.0434% ( 787) 00:11:31.460 9106.609 - 9159.248: 74.5892% ( 756) 00:11:31.460 9159.248 - 9211.888: 79.5701% ( 679) 00:11:31.460 9211.888 - 9264.527: 83.6414% ( 555) 00:11:31.460 9264.527 - 9317.166: 86.9425% ( 450) 00:11:31.460 9317.166 - 9369.806: 89.3779% ( 332) 00:11:31.460 9369.806 - 9422.445: 91.1972% ( 248) 00:11:31.460 9422.445 - 9475.084: 92.4589% ( 172) 00:11:31.460 9475.084 - 9527.724: 93.3172% ( 117) 00:11:31.460 9527.724 - 9580.363: 93.9114% ( 81) 00:11:31.460 9580.363 - 9633.002: 94.4029% ( 67) 00:11:31.460 9633.002 - 9685.642: 94.7990% ( 54) 00:11:31.460 9685.642 - 9738.281: 95.1951% ( 54) 00:11:31.460 9738.281 - 9790.920: 95.5252% ( 45) 00:11:31.460 9790.920 - 9843.560: 95.7746% ( 34) 00:11:31.460 9843.560 - 9896.199: 95.9800% ( 28) 00:11:31.460 9896.199 - 9948.839: 96.1194% ( 19) 00:11:31.460 9948.839 - 10001.478: 96.2441% ( 17) 00:11:31.460 10001.478 - 10054.117: 96.3395% ( 13) 00:11:31.460 10054.117 - 10106.757: 96.3982% ( 8) 00:11:31.460 10106.757 - 10159.396: 96.4642% ( 9) 00:11:31.460 10159.396 - 10212.035: 96.5229% ( 8) 00:11:31.460 10212.035 - 10264.675: 96.5889% ( 
9) 00:11:31.460 10264.675 - 10317.314: 96.6329% ( 6) 00:11:31.460 10317.314 - 10369.953: 96.6769% ( 6) 00:11:31.460 10369.953 - 10422.593: 96.7283% ( 7) 00:11:31.460 10422.593 - 10475.232: 96.7870% ( 8) 00:11:31.460 10475.232 - 10527.871: 96.8383% ( 7) 00:11:31.460 10527.871 - 10580.511: 96.9043% ( 9) 00:11:31.460 10580.511 - 10633.150: 96.9630% ( 8) 00:11:31.460 10633.150 - 10685.790: 97.0217% ( 8) 00:11:31.460 10685.790 - 10738.429: 97.0584% ( 5) 00:11:31.460 10738.429 - 10791.068: 97.0951% ( 5) 00:11:31.460 10791.068 - 10843.708: 97.1244% ( 4) 00:11:31.460 10843.708 - 10896.347: 97.1611% ( 5) 00:11:31.460 10896.347 - 10948.986: 97.1978% ( 5) 00:11:31.460 10948.986 - 11001.626: 97.2344% ( 5) 00:11:31.460 11001.626 - 11054.265: 97.2638% ( 4) 00:11:31.460 11054.265 - 11106.904: 97.3005% ( 5) 00:11:31.460 11106.904 - 11159.544: 97.3371% ( 5) 00:11:31.460 11159.544 - 11212.183: 97.3812% ( 6) 00:11:31.460 11212.183 - 11264.822: 97.4178% ( 5) 00:11:31.460 11264.822 - 11317.462: 97.4398% ( 3) 00:11:31.460 11317.462 - 11370.101: 97.4839% ( 6) 00:11:31.460 11370.101 - 11422.741: 97.5132% ( 4) 00:11:31.460 11422.741 - 11475.380: 97.5499% ( 5) 00:11:31.460 11475.380 - 11528.019: 97.5939% ( 6) 00:11:31.460 11528.019 - 11580.659: 97.6232% ( 4) 00:11:31.460 11580.659 - 11633.298: 97.6526% ( 4) 00:11:31.460 12528.167 - 12580.806: 97.6746% ( 3) 00:11:31.460 12580.806 - 12633.446: 97.6966% ( 3) 00:11:31.460 12633.446 - 12686.085: 97.7186% ( 3) 00:11:31.460 12686.085 - 12738.724: 97.7406% ( 3) 00:11:31.460 12738.724 - 12791.364: 97.7626% ( 3) 00:11:31.460 12791.364 - 12844.003: 97.7846% ( 3) 00:11:31.460 12844.003 - 12896.643: 97.8066% ( 3) 00:11:31.460 12896.643 - 12949.282: 97.8213% ( 2) 00:11:31.460 12949.282 - 13001.921: 97.8433% ( 3) 00:11:31.460 13001.921 - 13054.561: 97.8653% ( 3) 00:11:31.460 13054.561 - 13107.200: 97.8873% ( 3) 00:11:31.460 13107.200 - 13159.839: 97.9093% ( 3) 00:11:31.460 13159.839 - 13212.479: 97.9313% ( 3) 00:11:31.461 13212.479 - 13265.118: 97.9460% ( 2) 00:11:31.461 13265.118 - 13317.757: 97.9680% ( 3) 00:11:31.461 13317.757 - 13370.397: 97.9900% ( 3) 00:11:31.461 13370.397 - 13423.036: 98.0120% ( 3) 00:11:31.461 13423.036 - 13475.676: 98.0340% ( 3) 00:11:31.461 13475.676 - 13580.954: 98.0707% ( 5) 00:11:31.461 13580.954 - 13686.233: 98.1147% ( 6) 00:11:31.461 13686.233 - 13791.512: 98.1221% ( 1) 00:11:31.461 16002.365 - 16107.643: 98.1367% ( 2) 00:11:31.461 16107.643 - 16212.922: 98.1808% ( 6) 00:11:31.461 16212.922 - 16318.201: 98.2248% ( 6) 00:11:31.461 16318.201 - 16423.480: 98.2614% ( 5) 00:11:31.461 16423.480 - 16528.758: 98.3055% ( 6) 00:11:31.461 16528.758 - 16634.037: 98.3568% ( 7) 00:11:31.461 16634.037 - 16739.316: 98.4008% ( 6) 00:11:31.461 16739.316 - 16844.594: 98.4448% ( 6) 00:11:31.461 16844.594 - 16949.873: 98.4815% ( 5) 00:11:31.461 16949.873 - 17055.152: 98.5329% ( 7) 00:11:31.461 17055.152 - 17160.431: 98.5769% ( 6) 00:11:31.461 17160.431 - 17265.709: 98.5915% ( 2) 00:11:31.461 19371.284 - 19476.562: 98.6136% ( 3) 00:11:31.461 19476.562 - 19581.841: 98.6722% ( 8) 00:11:31.461 19581.841 - 19687.120: 98.7163% ( 6) 00:11:31.461 19687.120 - 19792.398: 98.7749% ( 8) 00:11:31.461 19792.398 - 19897.677: 98.8263% ( 7) 00:11:31.461 19897.677 - 20002.956: 98.8776% ( 7) 00:11:31.461 20002.956 - 20108.235: 98.9290% ( 7) 00:11:31.461 20108.235 - 20213.513: 98.9877% ( 8) 00:11:31.461 20213.513 - 20318.792: 99.0390% ( 7) 00:11:31.461 20318.792 - 20424.071: 99.0610% ( 3) 00:11:31.461 39374.239 - 39584.797: 99.0904% ( 4) 00:11:31.461 39584.797 - 39795.354: 99.1417% ( 7) 
00:11:31.461 39795.354 - 40005.912: 99.2004% ( 8) 00:11:31.461 40005.912 - 40216.469: 99.2518% ( 7) 00:11:31.461 40216.469 - 40427.027: 99.3104% ( 8) 00:11:31.461 40427.027 - 40637.584: 99.3618% ( 7) 00:11:31.461 40637.584 - 40848.141: 99.4205% ( 8) 00:11:31.461 40848.141 - 41058.699: 99.4718% ( 7) 00:11:31.461 41058.699 - 41269.256: 99.5232% ( 7) 00:11:31.461 41269.256 - 41479.814: 99.5305% ( 1) 00:11:31.461 51586.570 - 51797.128: 99.5892% ( 8) 00:11:31.461 51797.128 - 52007.685: 99.6406% ( 7) 00:11:31.461 52007.685 - 52218.243: 99.6846% ( 6) 00:11:31.461 52218.243 - 52428.800: 99.7359% ( 7) 00:11:31.461 52428.800 - 52639.357: 99.7946% ( 8) 00:11:31.461 52639.357 - 52849.915: 99.8460% ( 7) 00:11:31.461 52849.915 - 53060.472: 99.8973% ( 7) 00:11:31.461 53060.472 - 53271.030: 99.9560% ( 8) 00:11:31.461 53271.030 - 53481.587: 100.0000% ( 6) 00:11:31.461 00:11:31.461 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:31.461 ============================================================================== 00:11:31.461 Range in us Cumulative IO count 00:11:31.461 8159.100 - 8211.740: 0.0440% ( 6) 00:11:31.461 8211.740 - 8264.379: 0.1467% ( 14) 00:11:31.461 8264.379 - 8317.018: 0.3154% ( 23) 00:11:31.461 8317.018 - 8369.658: 0.8876% ( 78) 00:11:31.461 8369.658 - 8422.297: 2.8683% ( 270) 00:11:31.461 8422.297 - 8474.937: 5.9639% ( 422) 00:11:31.461 8474.937 - 8527.576: 9.3530% ( 462) 00:11:31.461 8527.576 - 8580.215: 13.6370% ( 584) 00:11:31.461 8580.215 - 8632.855: 18.4566% ( 657) 00:11:31.461 8632.855 - 8685.494: 23.7896% ( 727) 00:11:31.461 8685.494 - 8738.133: 29.2254% ( 741) 00:11:31.461 8738.133 - 8790.773: 34.6171% ( 735) 00:11:31.461 8790.773 - 8843.412: 40.2069% ( 762) 00:11:31.461 8843.412 - 8896.051: 45.9727% ( 786) 00:11:31.461 8896.051 - 8948.691: 51.6872% ( 779) 00:11:31.461 8948.691 - 9001.330: 57.4090% ( 780) 00:11:31.461 9001.330 - 9053.969: 63.1896% ( 788) 00:11:31.461 9053.969 - 9106.609: 68.9261% ( 782) 00:11:31.461 9106.609 - 9159.248: 74.5012% ( 760) 00:11:31.461 9159.248 - 9211.888: 79.6362% ( 700) 00:11:31.461 9211.888 - 9264.527: 83.6781% ( 551) 00:11:31.461 9264.527 - 9317.166: 86.8691% ( 435) 00:11:31.461 9317.166 - 9369.806: 89.3853% ( 343) 00:11:31.461 9369.806 - 9422.445: 91.2339% ( 252) 00:11:31.461 9422.445 - 9475.084: 92.5176% ( 175) 00:11:31.461 9475.084 - 9527.724: 93.3759% ( 117) 00:11:31.461 9527.724 - 9580.363: 94.0141% ( 87) 00:11:31.461 9580.363 - 9633.002: 94.5202% ( 69) 00:11:31.461 9633.002 - 9685.642: 94.9384% ( 57) 00:11:31.461 9685.642 - 9738.281: 95.2318% ( 40) 00:11:31.461 9738.281 - 9790.920: 95.5326% ( 41) 00:11:31.461 9790.920 - 9843.560: 95.7820% ( 34) 00:11:31.461 9843.560 - 9896.199: 95.9507% ( 23) 00:11:31.461 9896.199 - 9948.839: 96.0974% ( 20) 00:11:31.461 9948.839 - 10001.478: 96.2075% ( 15) 00:11:31.461 10001.478 - 10054.117: 96.3322% ( 17) 00:11:31.461 10054.117 - 10106.757: 96.3982% ( 9) 00:11:31.461 10106.757 - 10159.396: 96.4862% ( 12) 00:11:31.461 10159.396 - 10212.035: 96.5742% ( 12) 00:11:31.461 10212.035 - 10264.675: 96.6403% ( 9) 00:11:31.461 10264.675 - 10317.314: 96.7063% ( 9) 00:11:31.461 10317.314 - 10369.953: 96.7650% ( 8) 00:11:31.461 10369.953 - 10422.593: 96.7943% ( 4) 00:11:31.461 10422.593 - 10475.232: 96.8383% ( 6) 00:11:31.461 10475.232 - 10527.871: 96.8677% ( 4) 00:11:31.461 10527.871 - 10580.511: 96.8970% ( 4) 00:11:31.461 10580.511 - 10633.150: 96.9117% ( 2) 00:11:31.461 10633.150 - 10685.790: 96.9777% ( 9) 00:11:31.461 10685.790 - 10738.429: 97.0364% ( 8) 00:11:31.461 10738.429 - 10791.068: 97.0731% 
( 5) 00:11:31.461 10791.068 - 10843.708: 97.0951% ( 3) 00:11:31.461 10843.708 - 10896.347: 97.1244% ( 4) 00:11:31.461 10896.347 - 10948.986: 97.1538% ( 4) 00:11:31.461 10948.986 - 11001.626: 97.1904% ( 5) 00:11:31.461 11001.626 - 11054.265: 97.2344% ( 6) 00:11:31.461 11054.265 - 11106.904: 97.2638% ( 4) 00:11:31.461 11106.904 - 11159.544: 97.3151% ( 7) 00:11:31.461 11159.544 - 11212.183: 97.3518% ( 5) 00:11:31.461 11212.183 - 11264.822: 97.3958% ( 6) 00:11:31.461 11264.822 - 11317.462: 97.4325% ( 5) 00:11:31.461 11317.462 - 11370.101: 97.4545% ( 3) 00:11:31.461 11370.101 - 11422.741: 97.4839% ( 4) 00:11:31.461 11422.741 - 11475.380: 97.4985% ( 2) 00:11:31.461 11475.380 - 11528.019: 97.5205% ( 3) 00:11:31.461 11528.019 - 11580.659: 97.5352% ( 2) 00:11:31.461 11580.659 - 11633.298: 97.5572% ( 3) 00:11:31.461 11633.298 - 11685.937: 97.5719% ( 2) 00:11:31.461 11685.937 - 11738.577: 97.5939% ( 3) 00:11:31.461 11738.577 - 11791.216: 97.6086% ( 2) 00:11:31.461 11791.216 - 11843.855: 97.6232% ( 2) 00:11:31.461 11843.855 - 11896.495: 97.6379% ( 2) 00:11:31.461 11896.495 - 11949.134: 97.6526% ( 2) 00:11:31.461 12212.331 - 12264.970: 97.6673% ( 2) 00:11:31.461 12264.970 - 12317.610: 97.6966% ( 4) 00:11:31.461 12317.610 - 12370.249: 97.7113% ( 2) 00:11:31.461 12370.249 - 12422.888: 97.7406% ( 4) 00:11:31.461 12422.888 - 12475.528: 97.7626% ( 3) 00:11:31.461 12475.528 - 12528.167: 97.7773% ( 2) 00:11:31.461 12528.167 - 12580.806: 97.7920% ( 2) 00:11:31.461 12580.806 - 12633.446: 97.8140% ( 3) 00:11:31.461 12633.446 - 12686.085: 97.8360% ( 3) 00:11:31.461 12686.085 - 12738.724: 97.8580% ( 3) 00:11:31.461 12738.724 - 12791.364: 97.8800% ( 3) 00:11:31.461 12791.364 - 12844.003: 97.9020% ( 3) 00:11:31.461 12844.003 - 12896.643: 97.9240% ( 3) 00:11:31.461 12896.643 - 12949.282: 97.9387% ( 2) 00:11:31.461 12949.282 - 13001.921: 97.9607% ( 3) 00:11:31.461 13001.921 - 13054.561: 97.9827% ( 3) 00:11:31.461 13054.561 - 13107.200: 98.0047% ( 3) 00:11:31.461 13107.200 - 13159.839: 98.0267% ( 3) 00:11:31.461 13159.839 - 13212.479: 98.0487% ( 3) 00:11:31.461 13212.479 - 13265.118: 98.0707% ( 3) 00:11:31.461 13265.118 - 13317.757: 98.0927% ( 3) 00:11:31.461 13317.757 - 13370.397: 98.1074% ( 2) 00:11:31.461 13370.397 - 13423.036: 98.1221% ( 2) 00:11:31.461 15370.692 - 15475.971: 98.1294% ( 1) 00:11:31.461 15475.971 - 15581.250: 98.1734% ( 6) 00:11:31.461 15581.250 - 15686.529: 98.2248% ( 7) 00:11:31.461 15686.529 - 15791.807: 98.2761% ( 7) 00:11:31.461 15791.807 - 15897.086: 98.3128% ( 5) 00:11:31.461 15897.086 - 16002.365: 98.3568% ( 6) 00:11:31.461 16002.365 - 16107.643: 98.4008% ( 6) 00:11:31.461 16107.643 - 16212.922: 98.4448% ( 6) 00:11:31.461 16212.922 - 16318.201: 98.4888% ( 6) 00:11:31.461 16318.201 - 16423.480: 98.5402% ( 7) 00:11:31.461 16423.480 - 16528.758: 98.5842% ( 6) 00:11:31.461 16528.758 - 16634.037: 98.5915% ( 1) 00:11:31.461 19687.120 - 19792.398: 98.6209% ( 4) 00:11:31.461 19792.398 - 19897.677: 98.6796% ( 8) 00:11:31.461 19897.677 - 20002.956: 98.7309% ( 7) 00:11:31.461 20002.956 - 20108.235: 98.7823% ( 7) 00:11:31.461 20108.235 - 20213.513: 98.8410% ( 8) 00:11:31.461 20213.513 - 20318.792: 98.8923% ( 7) 00:11:31.461 20318.792 - 20424.071: 98.9437% ( 7) 00:11:31.461 20424.071 - 20529.349: 98.9950% ( 7) 00:11:31.461 20529.349 - 20634.628: 99.0464% ( 7) 00:11:31.461 20634.628 - 20739.907: 99.0610% ( 2) 00:11:31.461 37479.222 - 37689.780: 99.1050% ( 6) 00:11:31.461 37689.780 - 37900.337: 99.1564% ( 7) 00:11:31.461 37900.337 - 38110.895: 99.2151% ( 8) 00:11:31.461 38110.895 - 38321.452: 99.2738% ( 8) 
00:11:31.461 [tail of the preceding latency histogram: buckets 38321.452 - 52218.243us, 99.3251% -> 100.0000% ( 5); per-bucket rows omitted]
00:11:31.461 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:31.461 ==============================================================================
00:11:31.461        Range in us     Cumulative    IO count
00:11:31.462 [per-bucket rows omitted:  8106.461 -  8159.100: 0.0293% ( 4) through 50744.341 - 50954.898: 100.0000% ( 7)]
00:11:31.462 
00:11:31.462 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:31.462 ==============================================================================
00:11:31.462        Range in us     Cumulative    IO count
00:11:31.463 [per-bucket rows omitted:  8159.100 -  8211.740: 0.0657% ( 9) through 38742.567 - 38953.124: 100.0000% ( 7)]
00:11:31.463 
00:11:31.463 12:21:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:32.843 Initializing NVMe Controllers
00:11:32.843 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:32.843 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:32.843 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:32.843 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:32.843 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:32.843 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:32.843 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:32.843 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:32.843 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:32.843 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:32.843 Initialization complete. Launching workers.
00:11:32.843 ========================================================
00:11:32.843                                                                              Latency(us)
00:11:32.843 Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:32.843 PCIE (0000:00:10.0) NSID 1 from core 0:   12883.50     150.98    9958.72    7262.28   42888.31
00:11:32.843 PCIE (0000:00:11.0) NSID 1 from core 0:   12883.50     150.98    9943.93    7744.31   40862.65
00:11:32.843 PCIE (0000:00:13.0) NSID 1 from core 0:   12883.50     150.98    9928.64    7538.69   40183.72
00:11:32.843 PCIE (0000:00:12.0) NSID 1 from core 0:   12883.50     150.98    9913.73    7621.73   38147.47
00:11:32.843 PCIE (0000:00:12.0) NSID 2 from core 0:   12883.50     150.98    9898.48    7653.54   36658.93
00:11:32.843 PCIE (0000:00:12.0) NSID 3 from core 0:   12947.28     151.73    9834.60    7484.71   28748.39
00:11:32.843 ========================================================
00:11:32.843 Total                                  :   77364.78     906.62    9912.95    7262.28   42888.31
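A note on reading the invocation and the table above. As I read spdk_nvme_perf's usage text (worth double-checking against spdk_nvme_perf --help on the build under test), -q 128 keeps 128 I/Os outstanding, -w write selects a pure-write workload, -o 12288 issues 12288-byte I/Os, -t 1 runs for one second, and -LL enables the detailed latency tracking that produces the summary and histogram sections that follow. One consequence for the table: the MiB/s column is just IOPS x I/O size / 2^20. A minimal, illustrative check in Python; the dictionary simply copies three rows of the table above, and nothing here is SPDK code:

    # Sanity check: MiB/s ~= IOPS * io_size / 2^20 for the 12288-byte writes above.
    IO_SIZE_BYTES = 12288  # from the spdk_nvme_perf invocation (-o 12288)

    table_iops = {
        "PCIE (0000:00:10.0) NSID 1": 12883.50,
        "PCIE (0000:00:12.0) NSID 3": 12947.28,
        "Total": 77364.78,
    }

    for device, iops in table_iops.items():
        print(f"{device}: {iops * IO_SIZE_BYTES / 2**20:.2f} MiB/s")
    # Prints 150.98, 151.73 and 906.62 MiB/s, matching the table.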
00:11:32.843 Summary latency data, from core 0 (all values in us):
00:11:32.843 (columns: PCIE 0000:00:10.0 NSID 1, 0000:00:11.0 NSID 1, 0000:00:13.0 NSID 1, 0000:00:12.0 NSID 1/2/3)
00:11:32.843 ==========================================================================================
00:11:32.843 Percentile :   10.0/ns1    11.0/ns1    13.0/ns1    12.0/ns1    12.0/ns2    12.0/ns3
00:11:32.843   1.00000% :   8159.100    8211.740    8001.182    8159.100    8106.461    8053.822
00:11:32.843  10.00000% :   8738.133    8790.773    8790.773    8843.412    8843.412    8843.412
00:11:32.843  25.00000% :   9106.609    9106.609    9106.609    9159.248    9106.609    9106.609
00:11:32.843  50.00000% :   9422.445    9422.445    9422.445    9422.445    9422.445    9422.445
00:11:32.843  75.00000% :   9790.920    9738.281    9790.920    9738.281    9738.281    9790.920
00:11:32.843  90.00000% :  10264.675   10212.035   10317.314   10212.035   10159.396   10264.675
00:11:32.843  95.00000% :  13054.561   12580.806   12212.331   12212.331   12370.249   12896.643
00:11:32.843  98.00000% :  19266.005   19160.726   18739.611   18213.218   18318.496   19160.726
00:11:32.843  99.00000% :  21687.415   21476.858   21582.137   21371.579   21476.858   20318.792
00:11:32.843  99.50000% :  34320.861   32636.402   32425.844   30530.827   29267.483   21371.579
00:11:32.843  99.90000% :  42532.601   40637.584   40005.912   37900.337   36426.435   28635.810
00:11:32.843  99.99000% :  42953.716   40848.141   40216.469   38321.452   36636.993   28846.368
00:11:32.843  99.99900% :  42953.716   41058.699   40216.469   38321.452   36847.550   28846.368
00:11:32.843  99.99990% :  42953.716   41058.699   40216.469   38321.452   36847.550   28846.368
00:11:32.843  99.99999% :  42953.716   41058.699   40216.469   38321.452   36847.550   28846.368
00:11:32.843 
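Each histogram row below has the shape "lo - hi: cum% ( n )": n I/Os completed with a latency between lo and hi microseconds, and cum% is the running share of all I/Os completed at or below hi. The percentile summary above falls straight out of such rows: the reported p-th percentile is the upper edge of the first bucket whose cumulative share reaches p, which is why the 9422.445us median matches a bucket edge in every device's histogram. A rough sketch in Python; the regex and helper are hypothetical illustrations, not SPDK code:

    import re

    # Hypothetical parser for rows shaped like "9369.806 - 9422.445: 51.7017% (  635)".
    ROW = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\s*\)")

    def percentile_us(rows, pct):
        """Return the upper bucket edge (us) of the first row whose cumulative % reaches pct."""
        for line in rows:
            m = ROW.search(line)
            if m and float(m[3]) >= pct:
                return float(m[2])
        raise ValueError("histogram never reaches the requested percentile")

    # Two rows copied from the PCIE (0000:00:10.0) NSID 1 histogram below:
    sample = [
        "9317.166 - 9369.806: 46.7899% ( 579)",
        "9369.806 - 9422.445: 51.7017% ( 635)",
    ]
    print(percentile_us(sample, 50.0))  # -> 9422.445, the 50.00000% entry above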
00:11:32.844 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:32.844 ==============================================================================
00:11:32.844        Range in us     Cumulative    IO count
00:11:32.845 [per-bucket rows omitted:  7211.592 -  7264.231: 0.0077% ( 1) through 42743.158 - 42953.716: 100.0000% ( 4)]
00:11:32.845 
00:11:32.845 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:32.845 ==============================================================================
00:11:32.845        Range in us     Cumulative    IO count
00:11:32.846 [per-bucket rows omitted:  7737.986 -  7790.625: 0.0387% ( 5) through 40848.141 - 41058.699: 100.0000% ( 1)]
00:11:32.846 
00:11:32.846 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:32.846 ==============================================================================
00:11:32.846        Range in us     Cumulative    IO count
00:11:32.847 [per-bucket rows omitted:  7527.428 -  7580.067: 0.0155% ( 2) through 40005.912 - 40216.469: 100.0000% ( 7)]
00:11:32.847 
00:11:32.847 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:32.847 ==============================================================================
00:11:32.847        Range in us     Cumulative    IO count
00:11:32.847 [per-bucket rows omitted:  7580.067 -  7632.707: 0.0077% ( 1) through 38110.895 - 38321.452: 100.0000% ( 2)]
00:11:32.848 
00:11:32.848 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:32.848 ==============================================================================
00:11:32.848        Range in us     Cumulative    IO count
00:11:32.848 [per-bucket rows omitted:  7632.707 -  7685.346: 0.0077% ( 1) through 36636.993 - 36847.550: 100.0000% ( 1)]
00:11:32.849 
00:11:32.849 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:32.849 ==============================================================================
00:11:32.849        Range in us     Cumulative    IO count
00:11:32.849 [per-bucket rows omitted:  7474.789 -  7527.428: 0.0077% ( 1) through 11843.855 - 11896.495: 93.3190% ( 7)]
00:11:32.849 11896.495 - 11949.134: 93.4267% ( 14) 00:11:32.849 11949.134 - 12001.773: 93.4806% ( 7) 00:11:32.849 12001.773 - 12054.413: 93.5499% ( 9) 00:11:32.849 12054.413 - 12107.052: 93.6345% ( 11) 00:11:32.849 12107.052 - 12159.692: 93.7038% ( 9) 00:11:32.849 12159.692 - 12212.331: 93.7500% ( 6) 00:11:32.849 12212.331 - 12264.970: 93.7885% ( 5) 00:11:32.849 12264.970 - 12317.610: 93.8732% ( 11) 00:11:32.849 12317.610 - 12370.249: 94.0117% ( 18) 00:11:32.849 12370.249 - 12422.888: 94.1195% ( 14) 00:11:32.849 12422.888 - 12475.528: 94.2041% ( 11) 00:11:32.849 12475.528 - 12528.167: 94.2426% ( 5) 00:11:32.849 12528.167 - 12580.806: 94.3350% ( 12) 00:11:32.849 12580.806 - 12633.446: 94.3812% ( 6) 00:11:32.849 12633.446 - 12686.085: 94.4735% ( 12) 00:11:32.849 12686.085 - 12738.724: 94.5813% ( 14) 00:11:32.849 12738.724 - 12791.364: 94.7583% ( 23) 00:11:32.849 12791.364 - 12844.003: 94.9661% ( 27) 00:11:32.849 12844.003 - 12896.643: 95.1817% ( 28) 00:11:32.849 12896.643 - 12949.282: 95.4126% ( 30) 00:11:32.849 12949.282 - 13001.921: 95.5896% ( 23) 00:11:32.849 13001.921 - 13054.561: 95.7050% ( 15) 00:11:32.849 13054.561 - 13107.200: 95.8359% ( 17) 00:11:32.849 13107.200 - 13159.839: 95.8744% ( 5) 00:11:32.849 13159.839 - 13212.479: 95.9206% ( 6) 00:11:32.849 13212.479 - 13265.118: 95.9591% ( 5) 00:11:32.849 13265.118 - 13317.757: 96.0745% ( 15) 00:11:32.849 13317.757 - 13370.397: 96.1900% ( 15) 00:11:32.849 13370.397 - 13423.036: 96.2900% ( 13) 00:11:32.849 13423.036 - 13475.676: 96.3670% ( 10) 00:11:32.849 13475.676 - 13580.954: 96.4286% ( 8) 00:11:32.849 13580.954 - 13686.233: 96.4517% ( 3) 00:11:32.849 13686.233 - 13791.512: 96.4748% ( 3) 00:11:32.849 13791.512 - 13896.790: 96.5055% ( 4) 00:11:32.849 13896.790 - 14002.069: 96.5209% ( 2) 00:11:32.849 14002.069 - 14107.348: 96.5440% ( 3) 00:11:32.849 14107.348 - 14212.627: 96.5517% ( 1) 00:11:32.849 15370.692 - 15475.971: 96.5748% ( 3) 00:11:32.849 15475.971 - 15581.250: 96.6287% ( 7) 00:11:32.849 15581.250 - 15686.529: 96.6749% ( 6) 00:11:32.849 15686.529 - 15791.807: 96.8750% ( 26) 00:11:32.849 15791.807 - 15897.086: 97.1136% ( 31) 00:11:32.849 15897.086 - 16002.365: 97.2829% ( 22) 00:11:32.849 16002.365 - 16107.643: 97.3753% ( 12) 00:11:32.849 16107.643 - 16212.922: 97.4061% ( 4) 00:11:32.849 16212.922 - 16318.201: 97.4292% ( 3) 00:11:32.849 16318.201 - 16423.480: 97.4523% ( 3) 00:11:32.849 16423.480 - 16528.758: 97.4754% ( 3) 00:11:32.849 16528.758 - 16634.037: 97.4908% ( 2) 00:11:32.849 16634.037 - 16739.316: 97.5062% ( 2) 00:11:32.849 16739.316 - 16844.594: 97.5292% ( 3) 00:11:32.849 16844.594 - 16949.873: 97.5369% ( 1) 00:11:32.849 18002.660 - 18107.939: 97.5446% ( 1) 00:11:32.849 18107.939 - 18213.218: 97.5754% ( 4) 00:11:32.849 18213.218 - 18318.496: 97.5985% ( 3) 00:11:32.849 18318.496 - 18423.775: 97.6216% ( 3) 00:11:32.849 18423.775 - 18529.054: 97.6524% ( 4) 00:11:32.849 18529.054 - 18634.333: 97.6832% ( 4) 00:11:32.849 18634.333 - 18739.611: 97.7063% ( 3) 00:11:32.849 18739.611 - 18844.890: 97.7371% ( 4) 00:11:32.849 18844.890 - 18950.169: 97.7909% ( 7) 00:11:32.849 18950.169 - 19055.447: 97.9372% ( 19) 00:11:32.849 19055.447 - 19160.726: 98.0603% ( 16) 00:11:32.849 19160.726 - 19266.005: 98.2143% ( 20) 00:11:32.849 19266.005 - 19371.284: 98.4298% ( 28) 00:11:32.849 19371.284 - 19476.562: 98.5068% ( 10) 00:11:32.849 19476.562 - 19581.841: 98.5837% ( 10) 00:11:32.849 19581.841 - 19687.120: 98.6222% ( 5) 00:11:32.849 19687.120 - 19792.398: 98.6761% ( 7) 00:11:32.849 19792.398 - 19897.677: 98.7300% ( 7) 00:11:32.849 19897.677 - 
20002.956: 98.7916% ( 8) 00:11:32.849 20002.956 - 20108.235: 98.8685% ( 10) 00:11:32.849 20108.235 - 20213.513: 98.9301% ( 8) 00:11:32.849 20213.513 - 20318.792: 99.0071% ( 10) 00:11:32.849 20318.792 - 20424.071: 99.0610% ( 7) 00:11:32.849 20424.071 - 20529.349: 99.1379% ( 10) 00:11:32.849 20529.349 - 20634.628: 99.3765% ( 31) 00:11:32.849 20634.628 - 20739.907: 99.4150% ( 5) 00:11:32.849 20739.907 - 20845.186: 99.4304% ( 2) 00:11:32.849 20845.186 - 20950.464: 99.4458% ( 2) 00:11:32.849 20950.464 - 21055.743: 99.4612% ( 2) 00:11:32.849 21055.743 - 21161.022: 99.4689% ( 1) 00:11:32.849 21161.022 - 21266.300: 99.4920% ( 3) 00:11:32.849 21266.300 - 21371.579: 99.5074% ( 2) 00:11:32.849 26951.351 - 27161.908: 99.5382% ( 4) 00:11:32.849 27161.908 - 27372.466: 99.5998% ( 8) 00:11:32.849 27372.466 - 27583.023: 99.6613% ( 8) 00:11:32.849 27583.023 - 27793.581: 99.7229% ( 8) 00:11:32.849 27793.581 - 28004.138: 99.7768% ( 7) 00:11:32.849 28004.138 - 28214.696: 99.8461% ( 9) 00:11:32.849 28214.696 - 28425.253: 99.8999% ( 7) 00:11:32.849 28425.253 - 28635.810: 99.9615% ( 8) 00:11:32.849 28635.810 - 28846.368: 100.0000% ( 5) 00:11:32.849 00:11:32.849 12:21:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:32.849 00:11:32.850 real 0m2.694s 00:11:32.850 user 0m2.278s 00:11:32.850 sys 0m0.302s 00:11:32.850 12:21:55 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.850 ************************************ 00:11:32.850 END TEST nvme_perf 00:11:32.850 ************************************ 00:11:32.850 12:21:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:32.850 12:21:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:32.850 12:21:55 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.850 12:21:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.850 12:21:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:32.850 ************************************ 00:11:32.850 START TEST nvme_hello_world 00:11:32.850 ************************************ 00:11:32.850 12:21:56 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:33.109 Initializing NVMe Controllers 00:11:33.109 Attached to 0000:00:10.0 00:11:33.109 Namespace ID: 1 size: 6GB 00:11:33.109 Attached to 0000:00:11.0 00:11:33.109 Namespace ID: 1 size: 5GB 00:11:33.109 Attached to 0000:00:13.0 00:11:33.109 Namespace ID: 1 size: 1GB 00:11:33.109 Attached to 0000:00:12.0 00:11:33.109 Namespace ID: 1 size: 4GB 00:11:33.109 Namespace ID: 2 size: 4GB 00:11:33.109 Namespace ID: 3 size: 4GB 00:11:33.109 Initialization complete. 00:11:33.109 INFO: using host memory buffer for IO 00:11:33.109 Hello world! 00:11:33.109 INFO: using host memory buffer for IO 00:11:33.109 Hello world! 00:11:33.109 INFO: using host memory buffer for IO 00:11:33.109 Hello world! 00:11:33.109 INFO: using host memory buffer for IO 00:11:33.109 Hello world! 00:11:33.109 INFO: using host memory buffer for IO 00:11:33.109 Hello world! 00:11:33.109 INFO: using host memory buffer for IO 00:11:33.109 Hello world! 
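For readers following along, hello_world is the smallest possible SPDK NVMe program: probe and attach the controllers (the "Attached to ..." lines above), write "Hello world!" into LBA 0 of each namespace through a pinned host buffer, and read it back. A minimal sketch of that flow against the public SPDK API — the read-back, error handling, and per-controller loop are trimmed, so treat it as an illustration rather than the example's literal source:

```c
/* Hedged sketch of the hello_world flow. Uses only public SPDK calls
 * (spdk_nvme_probe, spdk_nvme_ctrlr_alloc_io_qpair, spdk_nvme_ns_cmd_write);
 * the read-back/compare step and error handling are omitted for brevity. */
#include "spdk/nvme.h"
#include "spdk/env.h"
#include <string.h>

static struct spdk_nvme_ctrlr *g_ctrlr;

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts) {
	return true;                 /* attach to every controller we find */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts) {
	g_ctrlr = ctrlr;             /* produces the "Attached to ..." log lines */
}

static void write_done(void *arg, const struct spdk_nvme_cpl *cpl) {
	*(bool *)arg = true;
}

int main(void) {
	struct spdk_env_opts opts;
	spdk_env_opts_init(&opts);
	opts.name = "hello_world";
	opts.shm_id = 0;                         /* matches the -i 0 flag above */
	if (spdk_env_init(&opts) < 0) return 1;

	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

	struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
	struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

	/* "using host memory buffer for IO": a pinned, DMA-able allocation */
	char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
	                         SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	strcpy(buf, "Hello world!");

	bool done = false;
	spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1 /* blocks */,
	                       write_done, &done, 0);
	while (!done) {
		spdk_nvme_qpair_process_completions(qp, 0);
	}
	return 0;
}
```

Completions only arrive when the application polls the qpair; SPDK's userspace driver has no interrupt path, which is why every test in this suite runs its own polling loop.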
00:11:33.109 ************************************ 00:11:33.109 END TEST nvme_hello_world 00:11:33.109 ************************************ 00:11:33.109 00:11:33.109 real 0m0.281s 00:11:33.109 user 0m0.103s 00:11:33.109 sys 0m0.138s 00:11:33.109 12:21:56 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.109 12:21:56 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:33.109 12:21:56 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:33.109 12:21:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:33.109 12:21:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.109 12:21:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.109 ************************************ 00:11:33.109 START TEST nvme_sgl 00:11:33.109 ************************************ 00:11:33.109 12:21:56 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:33.367 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:33.367 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:33.367 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:33.367 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:33.367 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:33.367 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:33.367 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:33.367 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:33.626 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:33.626 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:33.626 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:33.626 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:33.626 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:33.626 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:33.626 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:33.626 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:33.627 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:11:33.627 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:33.627 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:33.627 NVMe Readv/Writev Request test 00:11:33.627 Attached to 0000:00:10.0 00:11:33.627 Attached to 0000:00:11.0 00:11:33.627 Attached to 0000:00:13.0 00:11:33.627 Attached to 0000:00:12.0 00:11:33.627 0000:00:10.0: build_io_request_2 test passed 00:11:33.627 0000:00:10.0: build_io_request_4 test passed 00:11:33.627 0000:00:10.0: build_io_request_5 test passed 00:11:33.627 0000:00:10.0: build_io_request_6 test passed 00:11:33.627 0000:00:10.0: build_io_request_7 test passed 00:11:33.627 0000:00:10.0: build_io_request_10 test passed 00:11:33.627 0000:00:11.0: build_io_request_2 test passed 00:11:33.627 0000:00:11.0: build_io_request_4 test passed 00:11:33.627 0000:00:11.0: build_io_request_5 test passed 00:11:33.627 0000:00:11.0: build_io_request_6 test passed 00:11:33.627 0000:00:11.0: build_io_request_7 test passed 00:11:33.627 0000:00:11.0: build_io_request_10 test passed 00:11:33.627 Cleaning up... 00:11:33.627 00:11:33.627 real 0m0.370s 00:11:33.627 user 0m0.158s 00:11:33.627 sys 0m0.162s 00:11:33.627 ************************************ 00:11:33.627 END TEST nvme_sgl 00:11:33.627 ************************************ 00:11:33.627 12:21:56 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.627 12:21:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:33.627 12:21:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:33.627 12:21:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:33.627 12:21:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.627 12:21:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.627 ************************************ 00:11:33.627 START TEST nvme_e2edp 00:11:33.627 ************************************ 00:11:33.627 12:21:56 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:33.886 NVMe Write/Read with End-to-End data protection test 00:11:33.886 Attached to 0000:00:10.0 00:11:33.886 Attached to 0000:00:11.0 00:11:33.886 Attached to 0000:00:13.0 00:11:33.886 Attached to 0000:00:12.0 00:11:33.886 Cleaning up... 
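The nvme_dp binary exercises end-to-end data protection: writes and reads against namespaces formatted with protection information (PI). A hedged sketch of one such IO using SPDK's metadata-aware command variant; spdk_nvme_ns_cmd_write_with_md and SPDK_NVME_IO_FLAGS_PRACT are public API, but the flag combination shown is illustrative and depends on the PI type the namespace was formatted with:

```c
/* Sketch of a protected write, assuming a PI-formatted namespace.
 * With PRACT=1 the controller generates the protection information
 * itself, so the host-side metadata buffer carries no PI fields. */
#include "spdk/nvme.h"

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) {
	*(bool *)arg = true;
}

static int write_protected(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                           void *data, void *md, bool *done) {
	return spdk_nvme_ns_cmd_write_with_md(ns, qp, data, md,
	                                      0 /* LBA */, 1 /* blocks */,
	                                      io_done, done,
	                                      SPDK_NVME_IO_FLAGS_PRACT,
	                                      0 /* apptag mask */, 0 /* apptag */);
}
```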
00:11:33.886 00:11:33.886 real 0m0.292s 00:11:33.886 user 0m0.107s 00:11:33.886 sys 0m0.142s 00:11:33.886 12:21:57 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.886 12:21:57 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:33.886 ************************************ 00:11:33.886 END TEST nvme_e2edp 00:11:33.886 ************************************ 00:11:33.886 12:21:57 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:33.886 12:21:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:33.886 12:21:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.886 12:21:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.886 ************************************ 00:11:33.886 START TEST nvme_reserve 00:11:33.886 ************************************ 00:11:33.886 12:21:57 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:34.144 ===================================================== 00:11:34.144 NVMe Controller at PCI bus 0, device 16, function 0 00:11:34.144 ===================================================== 00:11:34.144 Reservations: Not Supported 00:11:34.144 ===================================================== 00:11:34.144 NVMe Controller at PCI bus 0, device 17, function 0 00:11:34.144 ===================================================== 00:11:34.144 Reservations: Not Supported 00:11:34.144 ===================================================== 00:11:34.144 NVMe Controller at PCI bus 0, device 19, function 0 00:11:34.144 ===================================================== 00:11:34.144 Reservations: Not Supported 00:11:34.144 ===================================================== 00:11:34.144 NVMe Controller at PCI bus 0, device 18, function 0 00:11:34.144 ===================================================== 00:11:34.144 Reservations: Not Supported 00:11:34.144 Reservation test passed 00:11:34.144 00:11:34.144 real 0m0.268s 00:11:34.144 user 0m0.085s 00:11:34.144 sys 0m0.142s 00:11:34.144 12:21:57 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.144 ************************************ 00:11:34.144 END TEST nvme_reserve 00:11:34.144 ************************************ 00:11:34.144 12:21:57 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 12:21:57 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:34.403 12:21:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:34.403 12:21:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.403 12:21:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 ************************************ 00:11:34.403 START TEST nvme_err_injection 00:11:34.403 ************************************ 00:11:34.403 12:21:57 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:34.662 NVMe Error Injection test 00:11:34.662 Attached to 0000:00:10.0 00:11:34.662 Attached to 0000:00:11.0 00:11:34.662 Attached to 0000:00:13.0 00:11:34.663 Attached to 0000:00:12.0 00:11:34.663 0000:00:11.0: get features failed as expected 00:11:34.663 0000:00:13.0: get features failed as expected 00:11:34.663 0000:00:12.0: get features failed as expected 00:11:34.663 0000:00:10.0: get features failed as expected 00:11:34.663 
0000:00:10.0: get features successfully as expected 00:11:34.663 0000:00:11.0: get features successfully as expected 00:11:34.663 0000:00:13.0: get features successfully as expected 00:11:34.663 0000:00:12.0: get features successfully as expected 00:11:34.663 0000:00:11.0: read failed as expected 00:11:34.663 0000:00:10.0: read failed as expected 00:11:34.663 0000:00:13.0: read failed as expected 00:11:34.663 0000:00:12.0: read failed as expected 00:11:34.663 0000:00:11.0: read successfully as expected 00:11:34.663 0000:00:10.0: read successfully as expected 00:11:34.663 0000:00:13.0: read successfully as expected 00:11:34.663 0000:00:12.0: read successfully as expected 00:11:34.663 Cleaning up... 00:11:34.663 00:11:34.663 real 0m0.298s 00:11:34.663 user 0m0.112s 00:11:34.663 sys 0m0.145s 00:11:34.663 12:21:57 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.663 ************************************ 00:11:34.663 END TEST nvme_err_injection 00:11:34.663 ************************************ 00:11:34.663 12:21:57 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:34.663 12:21:57 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:34.663 12:21:57 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:11:34.663 12:21:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.663 12:21:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.663 ************************************ 00:11:34.663 START TEST nvme_overhead 00:11:34.663 ************************************ 00:11:34.663 12:21:57 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:36.041 Initializing NVMe Controllers 00:11:36.041 Attached to 0000:00:10.0 00:11:36.041 Attached to 0000:00:11.0 00:11:36.041 Attached to 0000:00:13.0 00:11:36.041 Attached to 0000:00:12.0 00:11:36.041 Initialization complete. Launching workers. 
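The overhead tool that just launched measures software overhead rather than device latency: how many nanoseconds of CPU the driver spends submitting one IO, and how many it spends reaping the completion, reported below as the submit and complete histograms. A rough sketch of that style of measurement (not the tool's actual source), bracketing the calls with TSC reads via spdk_get_ticks():

```c
/* Hedged sketch of per-IO submit/complete overhead sampling. Only the
 * bracketing technique is shown; the real tool aggregates into buckets. */
#include "spdk/nvme.h"
#include "spdk/env.h"
#include <stdint.h>
#include <stdio.h>

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) {
	*(bool *)arg = true;
}

static void sample_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                          void *buf) {
	uint64_t hz = spdk_get_ticks_hz();
	bool done = false;

	uint64_t t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, &done, 0);
	uint64_t submit_ticks = spdk_get_ticks() - t0;      /* "submit (in ns)" */

	uint64_t complete_ticks = 0;
	while (!done) {
		uint64_t c0 = spdk_get_ticks();
		int n = spdk_nvme_qpair_process_completions(qp, 0);
		if (n > 0)                                       /* "complete (in ns)" */
			complete_ticks = spdk_get_ticks() - c0;
	}
	printf("submit %.1f ns, complete %.1f ns\n",
	       submit_ticks * 1e9 / hz, complete_ticks * 1e9 / hz);
}
```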
00:11:36.041 submit (in ns)   avg, min, max =  13200.2,  11845.0, 140412.9
00:11:36.041 complete (in ns) avg, min, max =   9207.7,   8469.1, 544269.1
00:11:36.041 
00:11:36.041 Submit histogram
00:11:36.041 ================
00:11:36.041        Range in us     Cumulative     Count
00:11:36.041 [ histogram buckets elided: ranges 11.823 us through 140.646 us, cumulative 0.0176% to 100.0000% ]
00:11:36.041 
00:11:36.042 Complete histogram
00:11:36.042 ==================
00:11:36.042        Range in us     Cumulative     Count
00:11:36.042 [ histogram buckets elided: ranges 8.431 us through 546.133 us, cumulative 0.0176% to 100.0000% ]
00:11:36.042 
00:11:36.042 real	0m1.298s
00:11:36.042 user	0m0.101s
00:11:36.042 sys	0m0.147s
00:11:36.042 12:21:59 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:36.042 12:21:59 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:11:36.042 ************************************
00:11:36.042 END TEST nvme_overhead
00:11:36.042 ************************************
00:11:36.042 12:21:59
nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:36.042 12:21:59 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:11:36.042 12:21:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.042 12:21:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.042 ************************************ 00:11:36.042 START TEST nvme_arbitration 00:11:36.042 ************************************ 00:11:36.042 12:21:59 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:39.333 Initializing NVMe Controllers 00:11:39.333 Attached to 0000:00:10.0 00:11:39.333 Attached to 0000:00:11.0 00:11:39.333 Attached to 0000:00:13.0 00:11:39.333 Attached to 0000:00:12.0 00:11:39.333 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:39.333 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:39.333 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:39.333 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:39.333 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:39.333 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:39.333 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:39.333 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:39.333 Initialization complete. Launching workers. 00:11:39.333 Starting thread on core 1 with urgent priority queue 00:11:39.333 Starting thread on core 2 with urgent priority queue 00:11:39.333 Starting thread on core 3 with urgent priority queue 00:11:39.333 Starting thread on core 0 with urgent priority queue 00:11:39.333 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:11:39.333 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:11:39.333 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:39.333 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:39.333 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:11:39.333 QEMU NVMe Ctrl (12342 ) core 3: 576.00 IO/s 173.61 secs/100000 ios 00:11:39.333 ======================================================== 00:11:39.333 00:11:39.592 ************************************ 00:11:39.592 END TEST nvme_arbitration 00:11:39.592 ************************************ 00:11:39.592 00:11:39.592 real 0m3.415s 00:11:39.592 user 0m9.353s 00:11:39.592 sys 0m0.167s 00:11:39.592 12:22:02 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.592 12:22:02 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:39.592 12:22:02 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:39.592 12:22:02 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:39.593 12:22:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.593 12:22:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.593 ************************************ 00:11:39.593 START TEST nvme_single_aen 00:11:39.593 ************************************ 00:11:39.593 12:22:02 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:39.906 Asynchronous Event Request test 00:11:39.906 Attached to 0000:00:10.0 00:11:39.906 Attached to 0000:00:11.0 00:11:39.906 Attached to 
0000:00:13.0 00:11:39.906 Attached to 0000:00:12.0 00:11:39.906 Reset controller to setup AER completions for this process 00:11:39.906 Registering asynchronous event callbacks... 00:11:39.906 Getting orig temperature thresholds of all controllers 00:11:39.906 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.906 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.906 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.906 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.906 Setting all controllers temperature threshold low to trigger AER 00:11:39.906 Waiting for all controllers temperature threshold to be set lower 00:11:39.906 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.906 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:39.906 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.906 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:39.906 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.906 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:39.906 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.906 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:39.906 Waiting for all controllers to trigger AER and reset threshold 00:11:39.906 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.906 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.906 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.906 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.906 Cleaning up... 
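The sequence above is the standard way to provoke an Asynchronous Event Request on demand: register an AER callback, then drag the temperature threshold (343 Kelvin originally) below the current temperature (323 Kelvin), so every controller fires the event and the driver resets the threshold. A sketch of those mechanics; spdk_nvme_ctrlr_register_aer_callback, spdk_nvme_ctrlr_cmd_set_feature and SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD are public API, the rest is illustrative:

```c
/* Hedged sketch of arming a temperature-threshold AER, not the aer
 * test's literal source. */
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
	/* corresponds to the "aer_cb for log page 2 ..." lines in the log */
}

static void set_done(void *arg, const struct spdk_nvme_cpl *cpl) { }

static void arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr) {
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* cdw11 carries the threshold in Kelvin; 0 is far below the
	 * reported 323 K, so an AER follows shortly. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
	                                0 /* cdw11: threshold */, 0 /* cdw12 */,
	                                NULL, 0, set_done, NULL);

	/* admin completions and the AER itself are reaped by polling
	 * spdk_nvme_ctrlr_process_admin_completions(ctrlr) */
}
```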
00:11:39.906 ************************************ 00:11:39.906 END TEST nvme_single_aen 00:11:39.906 ************************************ 00:11:39.906 00:11:39.906 real 0m0.288s 00:11:39.906 user 0m0.094s 00:11:39.906 sys 0m0.152s 00:11:39.906 12:22:03 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.906 12:22:03 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:39.906 12:22:03 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:39.906 12:22:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:39.906 12:22:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.906 12:22:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.906 ************************************ 00:11:39.906 START TEST nvme_doorbell_aers 00:11:39.906 ************************************ 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:39.906 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:40.165 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:40.165 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:40.165 12:22:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:40.165 12:22:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:40.424 [2024-10-07 12:22:03.489872] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:11:50.397 Executing: test_write_invalid_db 00:11:50.397 Waiting for AER completion... 00:11:50.397 Failure: test_write_invalid_db 00:11:50.397 00:11:50.397 Executing: test_invalid_db_write_overflow_sq 00:11:50.397 Waiting for AER completion... 00:11:50.397 Failure: test_invalid_db_write_overflow_sq 00:11:50.397 00:11:50.397 Executing: test_invalid_db_write_overflow_cq 00:11:50.397 Waiting for AER completion... 
00:11:50.397 Failure: test_invalid_db_write_overflow_cq 00:11:50.397 00:11:50.397 12:22:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:50.397 12:22:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:50.397 [2024-10-07 12:22:13.555624] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:00.364 Executing: test_write_invalid_db 00:12:00.364 Waiting for AER completion... 00:12:00.364 Failure: test_write_invalid_db 00:12:00.364 00:12:00.364 Executing: test_invalid_db_write_overflow_sq 00:12:00.364 Waiting for AER completion... 00:12:00.364 Failure: test_invalid_db_write_overflow_sq 00:12:00.364 00:12:00.364 Executing: test_invalid_db_write_overflow_cq 00:12:00.364 Waiting for AER completion... 00:12:00.364 Failure: test_invalid_db_write_overflow_cq 00:12:00.364 00:12:00.364 12:22:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:00.364 12:22:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:00.364 [2024-10-07 12:22:23.607432] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:10.331 Executing: test_write_invalid_db 00:12:10.331 Waiting for AER completion... 00:12:10.331 Failure: test_write_invalid_db 00:12:10.331 00:12:10.331 Executing: test_invalid_db_write_overflow_sq 00:12:10.331 Waiting for AER completion... 00:12:10.331 Failure: test_invalid_db_write_overflow_sq 00:12:10.331 00:12:10.331 Executing: test_invalid_db_write_overflow_cq 00:12:10.331 Waiting for AER completion... 00:12:10.331 Failure: test_invalid_db_write_overflow_cq 00:12:10.331 00:12:10.331 12:22:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:10.331 12:22:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:10.589 [2024-10-07 12:22:33.659076] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 Executing: test_write_invalid_db 00:12:20.626 Waiting for AER completion... 00:12:20.626 Failure: test_write_invalid_db 00:12:20.626 00:12:20.626 Executing: test_invalid_db_write_overflow_sq 00:12:20.626 Waiting for AER completion... 00:12:20.626 Failure: test_invalid_db_write_overflow_sq 00:12:20.626 00:12:20.626 Executing: test_invalid_db_write_overflow_cq 00:12:20.626 Waiting for AER completion... 
00:12:20.626 Failure: test_invalid_db_write_overflow_cq 00:12:20.626 00:12:20.626 00:12:20.626 real 0m40.330s 00:12:20.626 user 0m28.589s 00:12:20.626 sys 0m11.328s 00:12:20.626 12:22:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.626 12:22:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:20.626 ************************************ 00:12:20.626 END TEST nvme_doorbell_aers 00:12:20.626 ************************************ 00:12:20.626 12:22:43 nvme -- nvme/nvme.sh@97 -- # uname 00:12:20.626 12:22:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:20.626 12:22:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:20.626 12:22:43 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:12:20.626 12:22:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.626 12:22:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.626 ************************************ 00:12:20.626 START TEST nvme_multi_aen 00:12:20.626 ************************************ 00:12:20.626 12:22:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:20.626 [2024-10-07 12:22:43.740522] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.740620] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.740649] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.742611] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.742655] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.742679] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.744353] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.744532] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.744644] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.746112] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.746277] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 00:12:20.626 [2024-10-07 12:22:43.746386] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64908) is not found. Dropping the request. 
00:12:20.626 Child process pid: 65423 00:12:20.885 [Child] Asynchronous Event Request test 00:12:20.885 [Child] Attached to 0000:00:10.0 00:12:20.885 [Child] Attached to 0000:00:11.0 00:12:20.885 [Child] Attached to 0000:00:13.0 00:12:20.885 [Child] Attached to 0000:00:12.0 00:12:20.885 [Child] Registering asynchronous event callbacks... 00:12:20.885 [Child] Getting orig temperature thresholds of all controllers 00:12:20.885 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:20.885 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 [Child] Cleaning up... 00:12:20.885 Asynchronous Event Request test 00:12:20.885 Attached to 0000:00:10.0 00:12:20.885 Attached to 0000:00:11.0 00:12:20.885 Attached to 0000:00:13.0 00:12:20.885 Attached to 0000:00:12.0 00:12:20.885 Reset controller to setup AER completions for this process 00:12:20.885 Registering asynchronous event callbacks... 
00:12:20.885 Getting orig temperature thresholds of all controllers 00:12:20.885 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.885 Setting all controllers temperature threshold low to trigger AER 00:12:20.885 Waiting for all controllers temperature threshold to be set lower 00:12:20.885 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:20.885 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:20.885 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:20.885 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.885 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:20.885 Waiting for all controllers to trigger AER and reset threshold 00:12:20.885 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.885 Cleaning up... 00:12:20.885 00:12:20.885 real 0m0.620s 00:12:20.885 user 0m0.202s 00:12:20.885 sys 0m0.305s 00:12:20.885 12:22:44 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.885 12:22:44 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:20.885 ************************************ 00:12:20.885 END TEST nvme_multi_aen 00:12:20.885 ************************************ 00:12:20.885 12:22:44 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:20.885 12:22:44 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:20.885 12:22:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.885 12:22:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.211 ************************************ 00:12:21.211 START TEST nvme_startup 00:12:21.211 ************************************ 00:12:21.211 12:22:44 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:21.496 Initializing NVMe Controllers 00:12:21.496 Attached to 0000:00:10.0 00:12:21.496 Attached to 0000:00:11.0 00:12:21.496 Attached to 0000:00:13.0 00:12:21.496 Attached to 0000:00:12.0 00:12:21.496 Initialization complete. 00:12:21.496 Time used:194334.297 (us). 
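The startup test's "Time used" figure is simply the wall-clock cost of enumerating and initializing all four controllers. A sketch of such a measurement (assumed shape, not the test's literal source; the printed integer microseconds approximate the log's fractional value):

```c
/* Hedged sketch: wall-clock the probe/attach pass that produces the
 * "Initializing NVMe Controllers" / "Attached to ..." lines. */
#include "spdk/nvme.h"
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts) { return true; }
static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts) { }

static void time_startup(void) {
	uint64_t t0 = spdk_get_ticks();
	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL); /* init + attach */
	uint64_t us = (spdk_get_ticks() - t0) * 1000000 / spdk_get_ticks_hz();
	printf("Time used: %" PRIu64 " (us).\n", us);
}
```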
00:12:21.496 ************************************ 00:12:21.496 END TEST nvme_startup 00:12:21.496 ************************************ 00:12:21.496 00:12:21.496 real 0m0.296s 00:12:21.496 user 0m0.105s 00:12:21.496 sys 0m0.145s 00:12:21.496 12:22:44 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.496 12:22:44 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:21.496 12:22:44 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:21.496 12:22:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.496 12:22:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.496 12:22:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.496 ************************************ 00:12:21.496 START TEST nvme_multi_secondary 00:12:21.496 ************************************ 00:12:21.496 12:22:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:12:21.496 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65479 00:12:21.496 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:21.496 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65480 00:12:21.496 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:21.496 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:24.785 Initializing NVMe Controllers 00:12:24.785 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:24.785 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:24.785 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:24.785 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:24.785 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:24.785 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:24.785 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:24.785 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:24.785 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:24.785 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:24.785 Initialization complete. Launching workers. 
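All of the spdk_nvme_perf instances above are launched with -i 0, i.e. a shared-memory ID of 0. In the env API this corresponds to the shm_id field: every process passing the same ID joins one DPDK shared-memory group, the first becomes the primary, and later ones run as secondaries against the controllers the primary attached — which is what lets three perf processes drive the same four PCIe controllers concurrently. A sketch of that hookup (the flag-to-field mapping is an assumption; the field itself is public API):

```c
/* Hedged sketch of the multi-process env setup behind "-i 0". */
#include "spdk/env.h"

static int init_shared_env(const char *name) {
	struct spdk_env_opts opts;
	spdk_env_opts_init(&opts);
	opts.name = name;
	opts.shm_id = 0;             /* same value in all cooperating processes */
	return spdk_env_init(&opts); /* first caller becomes the primary */
}
```

Each process still creates its own IO qpairs, which is why the results below arrive as one latency table per worker core.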
00:12:24.785 ======================================================== 00:12:24.785 Latency(us) 00:12:24.785 Device Information : IOPS MiB/s Average min max 00:12:24.785 PCIE (0000:00:10.0) NSID 1 from core 1: 4756.52 18.58 3361.36 1838.18 12993.37 00:12:24.785 PCIE (0000:00:11.0) NSID 1 from core 1: 4756.52 18.58 3363.35 1991.52 12909.12 00:12:24.785 PCIE (0000:00:13.0) NSID 1 from core 1: 4756.52 18.58 3363.46 2022.60 12781.10 00:12:24.785 PCIE (0000:00:12.0) NSID 1 from core 1: 4756.52 18.58 3363.67 1882.38 12518.89 00:12:24.785 PCIE (0000:00:12.0) NSID 2 from core 1: 4756.52 18.58 3363.92 1922.51 12420.98 00:12:24.785 PCIE (0000:00:12.0) NSID 3 from core 1: 4756.52 18.58 3364.17 1994.88 12787.83 00:12:24.785 ======================================================== 00:12:24.785 Total : 28539.13 111.48 3363.32 1838.18 12993.37 00:12:24.785 00:12:25.044 Initializing NVMe Controllers 00:12:25.044 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:25.044 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:25.044 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:25.044 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:25.044 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:25.044 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:25.044 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:25.044 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:25.044 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:25.044 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:25.044 Initialization complete. Launching workers. 00:12:25.044 ======================================================== 00:12:25.044 Latency(us) 00:12:25.044 Device Information : IOPS MiB/s Average min max 00:12:25.044 PCIE (0000:00:10.0) NSID 1 from core 2: 3146.53 12.29 5082.75 1025.64 12402.45 00:12:25.044 PCIE (0000:00:11.0) NSID 1 from core 2: 3146.53 12.29 5083.97 1019.03 11184.40 00:12:25.044 PCIE (0000:00:13.0) NSID 1 from core 2: 3146.53 12.29 5083.02 1027.72 13635.55 00:12:25.044 PCIE (0000:00:12.0) NSID 1 from core 2: 3146.53 12.29 5083.06 1008.24 13447.35 00:12:25.044 PCIE (0000:00:12.0) NSID 2 from core 2: 3146.53 12.29 5082.64 1027.64 13832.05 00:12:25.044 PCIE (0000:00:12.0) NSID 3 from core 2: 3146.53 12.29 5081.36 1029.54 13676.14 00:12:25.044 ======================================================== 00:12:25.044 Total : 18879.16 73.75 5082.80 1008.24 13832.05 00:12:25.044 00:12:25.044 12:22:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65479 00:12:26.949 Initializing NVMe Controllers 00:12:26.949 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:26.949 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:26.949 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:26.949 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:26.949 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:26.949 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:26.949 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:26.949 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:26.949 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:26.949 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:26.949 Initialization complete. Launching workers. 
00:12:26.949 ======================================================== 00:12:26.949 Latency(us) 00:12:26.949 Device Information : IOPS MiB/s Average min max 00:12:26.949 PCIE (0000:00:10.0) NSID 1 from core 0: 8071.81 31.53 1980.66 948.66 10770.41 00:12:26.949 PCIE (0000:00:11.0) NSID 1 from core 0: 8071.81 31.53 1981.74 966.47 9285.74 00:12:26.949 PCIE (0000:00:13.0) NSID 1 from core 0: 8071.81 31.53 1981.70 893.07 8333.33 00:12:26.949 PCIE (0000:00:12.0) NSID 1 from core 0: 8071.81 31.53 1981.68 838.37 8767.61 00:12:26.949 PCIE (0000:00:12.0) NSID 2 from core 0: 8071.81 31.53 1981.65 780.85 9058.29 00:12:26.949 PCIE (0000:00:12.0) NSID 3 from core 0: 8075.01 31.54 1980.84 710.16 9772.64 00:12:26.949 ======================================================== 00:12:26.949 Total : 48434.04 189.20 1981.38 710.16 10770.41 00:12:26.949 00:12:26.949 12:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65480 00:12:26.949 12:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65549 00:12:26.949 12:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:26.949 12:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65550 00:12:26.949 12:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:26.949 12:22:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:30.301 Initializing NVMe Controllers 00:12:30.301 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:30.301 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:30.301 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:30.301 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:30.301 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:30.301 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:30.301 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:30.301 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:30.301 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:30.301 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:30.301 Initialization complete. Launching workers. 
00:12:30.301 ======================================================== 00:12:30.301 Latency(us) 00:12:30.301 Device Information : IOPS MiB/s Average min max 00:12:30.301 PCIE (0000:00:10.0) NSID 1 from core 0: 5253.59 20.52 3043.28 952.18 8613.87 00:12:30.301 PCIE (0000:00:11.0) NSID 1 from core 0: 5253.59 20.52 3045.04 976.91 8928.86 00:12:30.301 PCIE (0000:00:13.0) NSID 1 from core 0: 5253.59 20.52 3045.11 973.47 8955.11 00:12:30.301 PCIE (0000:00:12.0) NSID 1 from core 0: 5253.59 20.52 3045.22 974.86 9317.87 00:12:30.301 PCIE (0000:00:12.0) NSID 2 from core 0: 5253.59 20.52 3045.31 978.98 9614.05 00:12:30.301 PCIE (0000:00:12.0) NSID 3 from core 0: 5258.92 20.54 3042.34 982.19 9827.37 00:12:30.301 ======================================================== 00:12:30.301 Total : 31526.89 123.15 3044.38 952.18 9827.37 00:12:30.301 00:12:30.301 Initializing NVMe Controllers 00:12:30.301 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:30.301 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:30.301 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:30.301 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:30.301 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:30.301 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:30.301 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:30.301 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:30.301 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:30.301 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:30.301 Initialization complete. Launching workers. 00:12:30.301 ======================================================== 00:12:30.301 Latency(us) 00:12:30.301 Device Information : IOPS MiB/s Average min max 00:12:30.301 PCIE (0000:00:10.0) NSID 1 from core 1: 5416.82 21.16 2951.62 1003.80 11329.60 00:12:30.301 PCIE (0000:00:11.0) NSID 1 from core 1: 5416.82 21.16 2953.15 1027.21 11278.53 00:12:30.301 PCIE (0000:00:13.0) NSID 1 from core 1: 5416.82 21.16 2953.06 1025.96 11293.14 00:12:30.301 PCIE (0000:00:12.0) NSID 1 from core 1: 5416.82 21.16 2952.99 1020.21 11399.77 00:12:30.301 PCIE (0000:00:12.0) NSID 2 from core 1: 5416.82 21.16 2952.92 1027.51 11453.95 00:12:30.301 PCIE (0000:00:12.0) NSID 3 from core 1: 5416.82 21.16 2952.84 1014.49 11459.72 00:12:30.301 ======================================================== 00:12:30.301 Total : 32500.91 126.96 2952.76 1003.80 11459.72 00:12:30.301 00:12:32.833 Initializing NVMe Controllers 00:12:32.833 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:32.833 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:32.833 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:32.833 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:32.834 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:32.834 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:32.834 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:32.834 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:32.834 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:32.834 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:32.834 Initialization complete. Launching workers. 
00:12:32.834 ======================================================== 00:12:32.834 Latency(us) 00:12:32.834 Device Information : IOPS MiB/s Average min max 00:12:32.834 PCIE (0000:00:10.0) NSID 1 from core 2: 3016.73 11.78 5301.64 1146.11 11458.99 00:12:32.834 PCIE (0000:00:11.0) NSID 1 from core 2: 3016.73 11.78 5303.52 1120.25 11640.26 00:12:32.834 PCIE (0000:00:13.0) NSID 1 from core 2: 3016.73 11.78 5303.42 1134.24 12041.34 00:12:32.834 PCIE (0000:00:12.0) NSID 1 from core 2: 3016.73 11.78 5306.79 1145.37 12199.85 00:12:32.834 PCIE (0000:00:12.0) NSID 2 from core 2: 3016.73 11.78 5307.51 1140.05 11971.68 00:12:32.834 PCIE (0000:00:12.0) NSID 3 from core 2: 3016.73 11.78 5306.88 1222.43 12268.72 00:12:32.834 ======================================================== 00:12:32.834 Total : 18100.36 70.70 5304.96 1120.25 12268.72 00:12:32.834 00:12:32.834 ************************************ 00:12:32.834 END TEST nvme_multi_secondary 00:12:32.834 ************************************ 00:12:32.834 12:22:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65549 00:12:32.834 12:22:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65550 00:12:32.834 00:12:32.834 real 0m11.045s 00:12:32.834 user 0m18.539s 00:12:32.834 sys 0m1.053s 00:12:32.834 12:22:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.834 12:22:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:32.834 12:22:55 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:32.834 12:22:55 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64487 ]] 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1090 -- # kill 64487 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1091 -- # wait 64487 00:12:32.834 [2024-10-07 12:22:55.662561] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.663059] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.663133] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.663176] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.668507] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.668592] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.668629] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.668668] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.673696] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 
00:12:32.834 [2024-10-07 12:22:55.673757] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.673780] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.673806] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.677410] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.677642] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.677673] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 [2024-10-07 12:22:55.677699] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65422) is not found. Dropping the request. 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:12:32.834 12:22:55 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.834 12:22:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.834 ************************************ 00:12:32.834 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:32.834 ************************************ 00:12:32.834 12:22:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:32.834 * Looking for test storage... 
00:12:32.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:32.834 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:32.834 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:12:32.834 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:33.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.094 --rc genhtml_branch_coverage=1 00:12:33.094 --rc genhtml_function_coverage=1 00:12:33.094 --rc genhtml_legend=1 00:12:33.094 --rc geninfo_all_blocks=1 00:12:33.094 --rc geninfo_unexecuted_blocks=1 00:12:33.094 00:12:33.094 ' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:33.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.094 --rc genhtml_branch_coverage=1 00:12:33.094 --rc genhtml_function_coverage=1 00:12:33.094 --rc genhtml_legend=1 00:12:33.094 --rc geninfo_all_blocks=1 00:12:33.094 --rc geninfo_unexecuted_blocks=1 00:12:33.094 00:12:33.094 ' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:33.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.094 --rc genhtml_branch_coverage=1 00:12:33.094 --rc genhtml_function_coverage=1 00:12:33.094 --rc genhtml_legend=1 00:12:33.094 --rc geninfo_all_blocks=1 00:12:33.094 --rc geninfo_unexecuted_blocks=1 00:12:33.094 00:12:33.094 ' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:33.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.094 --rc genhtml_branch_coverage=1 00:12:33.094 --rc genhtml_function_coverage=1 00:12:33.094 --rc genhtml_legend=1 00:12:33.094 --rc geninfo_all_blocks=1 00:12:33.094 --rc geninfo_unexecuted_blocks=1 00:12:33.094 00:12:33.094 ' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:33.094 
12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:12:33.094 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65717 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65717 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 65717 ']' 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
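The get_first_nvme_bdf trace above reduces to a small reusable pattern: ask scripts/gen_nvme.sh for a JSON bdev config and pull each traddr out with jq, which is exactly what the (( 4 == 0 )) count check and the printf of the four addresses show. A condensed sketch of that lookup — mapfile stands in for the traced word-splitting array assignment, and the rootdir value simply matches this run:

#!/usr/bin/env bash
# Condensed equivalent of the get_first_nvme_bdf helper traced above.
set -euo pipefail
rootdir=/home/vagrant/spdk_repo/spdk    # repo path used throughout this run
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"    # 0000:00:10.0 ... 0000:00:13.0 in this run
echo "first bdf: ${bdfs[0]}"  # the address handed to bdev_nvme_attach_controller below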
00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.095 12:22:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:33.354 [2024-10-07 12:22:56.414591] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:12:33.354 [2024-10-07 12:22:56.415157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65717 ] 00:12:33.354 [2024-10-07 12:22:56.607330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.613 [2024-10-07 12:22:56.825824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.614 [2024-10-07 12:22:56.826053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.614 [2024-10-07 12:22:56.826165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.614 [2024-10-07 12:22:56.826203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:34.549 nvme0n1 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_CfJqD.txt 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:34.549 true 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728303777 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65746 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:34.549 12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:34.549 
12:22:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:37.083 [2024-10-07 12:22:59.833110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:37.083 [2024-10-07 12:22:59.833641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:37.083 [2024-10-07 12:22:59.833673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.083 [2024-10-07 12:22:59.833690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.083 [2024-10-07 12:22:59.836047] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:37.083 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65746 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65746 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65746 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_CfJqD.txt 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_CfJqD.txt 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65717 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 65717 ']' 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 65717 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65717 00:12:37.083 killing process with pid 65717 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65717' 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 65717 00:12:37.083 12:22:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 65717 00:12:39.624 12:23:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:39.624 12:23:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:39.624 00:12:39.624 real 0m6.636s 00:12:39.624 user 0m22.375s 00:12:39.624 sys 0m0.838s 00:12:39.624 12:23:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:39.624 12:23:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:39.624 ************************************ 00:12:39.624 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:39.624 ************************************ 00:12:39.624 12:23:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:39.624 12:23:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:39.624 12:23:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.624 12:23:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.624 12:23:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.624 ************************************ 00:12:39.624 START TEST nvme_fio 00:12:39.624 ************************************ 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:12:39.624 12:23:02 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:39.624 12:23:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:39.882 12:23:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:39.882 12:23:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:40.141 12:23:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:40.141 12:23:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:40.141 12:23:03 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:40.141 12:23:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:40.399 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:40.399 fio-3.35 00:12:40.399 Starting 1 thread 00:12:44.590 00:12:44.590 test: (groupid=0, jobs=1): err= 0: pid=65898: Mon Oct 7 12:23:07 2024 00:12:44.590 read: IOPS=22.4k, BW=87.5MiB/s (91.8MB/s)(175MiB/2001msec) 00:12:44.590 slat (nsec): min=3741, max=69216, avg=4596.05, stdev=1200.03 00:12:44.590 clat (usec): min=247, max=11853, avg=2848.46, stdev=453.42 00:12:44.590 lat (usec): min=251, max=11921, avg=2853.06, stdev=454.07 00:12:44.590 clat percentiles (usec): 00:12:44.590 | 1.00th=[ 2442], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:12:44.590 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:12:44.590 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3097], 00:12:44.590 | 99.00th=[ 4555], 99.50th=[ 5604], 99.90th=[ 8586], 99.95th=[ 9110], 00:12:44.590 | 99.99th=[11600] 00:12:44.590 bw ( KiB/s): min=88984, max=93800, per=100.00%, avg=90789.33, stdev=2624.52, samples=3 00:12:44.590 iops : min=22246, max=23450, avg=22697.33, stdev=656.13, samples=3 00:12:44.590 write: IOPS=22.3k, BW=86.9MiB/s (91.2MB/s)(174MiB/2001msec); 0 zone resets 00:12:44.590 slat (nsec): min=3824, max=54478, avg=4834.36, stdev=1290.33 00:12:44.590 clat (usec): min=190, max=11689, avg=2856.71, stdev=471.65 00:12:44.590 lat (usec): min=195, max=11702, avg=2861.55, stdev=472.32 00:12:44.590 clat percentiles (usec): 00:12:44.590 | 1.00th=[ 2442], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:12:44.590 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:12:44.590 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3097], 00:12:44.590 | 99.00th=[ 4817], 99.50th=[ 5932], 99.90th=[ 8586], 99.95th=[ 9241], 00:12:44.590 | 99.99th=[11338] 00:12:44.590 bw ( KiB/s): min=89080, max=92952, per=100.00%, avg=90976.00, stdev=1937.24, samples=3 00:12:44.590 iops : min=22272, max=23236, avg=22744.00, stdev=482.31, samples=3 00:12:44.590 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:44.590 lat (msec) : 2=0.39%, 4=97.58%, 10=1.96%, 20=0.04% 00:12:44.590 cpu : usr=99.40%, sys=0.05%, 
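Every fio launch in this test goes through the same wrapper dance visible in the trace: ldd probes the SPDK ioengine for a linked libasan, and any hit is placed in LD_PRELOAD ahead of the plugin so the uninstrumented fio binary can dlopen the ASan-built engine. A condensed sketch of that pattern, reusing the paths from the run just shown — this is a simplified reading of the fio_plugin helper, not the verbatim implementation:

#!/usr/bin/env bash
# Condensed form of the fio_plugin wrapper traced above. Note the SPDK
# filename syntax: colons in the PCI address are written as dots.
set -euo pipefail
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')    # empty for non-ASan builds
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096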
ctx=3, majf=0, minf=608 00:12:44.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:44.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.590 issued rwts: total=44831,44539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.590 00:12:44.590 Run status group 0 (all jobs): 00:12:44.590 READ: bw=87.5MiB/s (91.8MB/s), 87.5MiB/s-87.5MiB/s (91.8MB/s-91.8MB/s), io=175MiB (184MB), run=2001-2001msec 00:12:44.590 WRITE: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=174MiB (182MB), run=2001-2001msec 00:12:44.849 ----------------------------------------------------- 00:12:44.849 Suppressions used: 00:12:44.849 count bytes template 00:12:44.849 1 32 /usr/src/fio/parse.c 00:12:44.849 1 8 libtcmalloc_minimal.so 00:12:44.849 ----------------------------------------------------- 00:12:44.849 00:12:44.849 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:44.849 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:44.849 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:44.849 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:45.108 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:45.108 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:45.368 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:45.368 12:23:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:45.368 12:23:08 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:45.368 12:23:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:45.627 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:45.627 fio-3.35 00:12:45.627 Starting 1 thread 00:12:50.906 00:12:50.906 test: (groupid=0, jobs=1): err= 0: pid=65964: Mon Oct 7 12:23:14 2024 00:12:50.906 read: IOPS=21.6k, BW=84.5MiB/s (88.7MB/s)(169MiB/2001msec) 00:12:50.906 slat (nsec): min=3774, max=75066, avg=4854.41, stdev=1296.72 00:12:50.906 clat (usec): min=222, max=9567, avg=2952.56, stdev=334.09 00:12:50.906 lat (usec): min=228, max=9642, avg=2957.42, stdev=334.57 00:12:50.906 clat percentiles (usec): 00:12:50.906 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:12:50.906 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2933], 00:12:50.906 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3654], 00:12:50.906 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 5145], 99.95th=[ 7373], 00:12:50.906 | 99.99th=[ 9372] 00:12:50.906 bw ( KiB/s): min=81328, max=90896, per=100.00%, avg=87058.00, stdev=5056.82, samples=3 00:12:50.906 iops : min=20332, max=22724, avg=21764.33, stdev=1264.11, samples=3 00:12:50.906 write: IOPS=21.5k, BW=83.9MiB/s (88.0MB/s)(168MiB/2001msec); 0 zone resets 00:12:50.906 slat (usec): min=3, max=105, avg= 5.07, stdev= 1.43 00:12:50.906 clat (usec): min=300, max=9476, avg=2956.15, stdev=334.35 00:12:50.906 lat (usec): min=305, max=9490, avg=2961.22, stdev=334.79 00:12:50.906 clat percentiles (usec): 00:12:50.906 | 1.00th=[ 2507], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2769], 00:12:50.906 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:12:50.906 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3654], 00:12:50.906 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 5735], 99.95th=[ 7570], 00:12:50.906 | 99.99th=[ 9110] 00:12:50.906 bw ( KiB/s): min=82672, max=90464, per=100.00%, avg=87207.67, stdev=4050.47, samples=3 00:12:50.906 iops : min=20668, max=22616, avg=21801.67, stdev=1012.50, samples=3 00:12:50.906 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:50.906 lat (msec) : 2=0.05%, 4=98.11%, 10=1.81% 00:12:50.906 cpu : usr=99.35%, sys=0.05%, ctx=12, majf=0, minf=608 00:12:50.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:50.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.906 issued rwts: total=43309,42982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.906 00:12:50.906 Run status group 0 (all jobs): 00:12:50.906 READ: bw=84.5MiB/s (88.7MB/s), 84.5MiB/s-84.5MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:12:50.906 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:12:51.163 ----------------------------------------------------- 00:12:51.163 Suppressions used: 00:12:51.163 count bytes template 00:12:51.163 1 32 /usr/src/fio/parse.c 00:12:51.163 1 8 libtcmalloc_minimal.so 00:12:51.163 ----------------------------------------------------- 00:12:51.163 00:12:51.163 12:23:14 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:51.163 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:51.163 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:51.163 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:51.727 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:51.727 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:51.727 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:51.727 12:23:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:51.727 12:23:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:51.985 12:23:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.985 12:23:15 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.985 12:23:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:51.985 12:23:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:51.985 12:23:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:51.985 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:51.985 fio-3.35 00:12:51.985 Starting 1 thread 00:12:56.176 00:12:56.176 test: (groupid=0, jobs=1): err= 0: pid=66030: Mon Oct 7 12:23:18 2024 00:12:56.176 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:12:56.176 slat (nsec): min=3757, max=73210, avg=4581.53, stdev=993.46 00:12:56.176 clat (usec): min=185, max=10394, avg=2823.79, stdev=258.37 00:12:56.176 lat (usec): min=189, max=10467, avg=2828.37, stdev=258.70 00:12:56.176 clat percentiles (usec): 00:12:56.176 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:12:56.176 | 30.00th=[ 2769], 40.00th=[ 
2802], 50.00th=[ 2802], 60.00th=[ 2835], 00:12:56.176 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2999], 00:12:56.176 | 99.00th=[ 3392], 99.50th=[ 3884], 99.90th=[ 5407], 99.95th=[ 7832], 00:12:56.176 | 99.99th=[10290] 00:12:56.176 bw ( KiB/s): min=88704, max=90888, per=99.51%, avg=89949.33, stdev=1123.83, samples=3 00:12:56.176 iops : min=22176, max=22722, avg=22487.33, stdev=280.96, samples=3 00:12:56.176 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:12:56.176 slat (nsec): min=3846, max=38111, avg=4802.53, stdev=1019.71 00:12:56.176 clat (usec): min=231, max=10287, avg=2829.28, stdev=261.13 00:12:56.176 lat (usec): min=235, max=10300, avg=2834.08, stdev=261.44 00:12:56.176 clat percentiles (usec): 00:12:56.176 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:12:56.176 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2802], 60.00th=[ 2835], 00:12:56.176 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 2999], 00:12:56.176 | 99.00th=[ 3490], 99.50th=[ 3949], 99.90th=[ 6063], 99.95th=[ 8029], 00:12:56.176 | 99.99th=[10028] 00:12:56.176 bw ( KiB/s): min=88104, max=92168, per=100.00%, avg=90165.33, stdev=2032.64, samples=3 00:12:56.176 iops : min=22026, max=23042, avg=22541.33, stdev=508.16, samples=3 00:12:56.176 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:56.176 lat (msec) : 2=0.43%, 4=99.07%, 10=0.45%, 20=0.01% 00:12:56.176 cpu : usr=99.40%, sys=0.00%, ctx=2, majf=0, minf=608 00:12:56.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:56.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:56.176 issued rwts: total=45219,44971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:56.176 00:12:56.176 Run status group 0 (all jobs): 00:12:56.176 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:12:56.176 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:12:56.176 ----------------------------------------------------- 00:12:56.176 Suppressions used: 00:12:56.176 count bytes template 00:12:56.176 1 32 /usr/src/fio/parse.c 00:12:56.176 1 8 libtcmalloc_minimal.so 00:12:56.176 ----------------------------------------------------- 00:12:56.176 00:12:56.176 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:56.176 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:56.176 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:56.176 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:56.434 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:56.434 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:56.694 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:56.694 12:23:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:56.694 12:23:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:56.953 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:56.953 fio-3.35 00:12:56.953 Starting 1 thread 00:13:02.221 00:13:02.221 test: (groupid=0, jobs=1): err= 0: pid=66091: Mon Oct 7 12:23:24 2024 00:13:02.221 read: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(176MiB/2001msec) 00:13:02.221 slat (nsec): min=3799, max=74581, avg=4606.69, stdev=1191.83 00:13:02.221 clat (usec): min=244, max=9842, avg=2826.77, stdev=306.57 00:13:02.221 lat (usec): min=248, max=9917, avg=2831.38, stdev=306.85 00:13:02.221 clat percentiles (usec): 00:13:02.221 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2704], 00:13:02.221 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 00:13:02.221 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097], 00:13:02.221 | 99.00th=[ 3785], 99.50th=[ 4424], 99.90th=[ 6063], 99.95th=[ 7177], 00:13:02.221 | 99.99th=[ 9503] 00:13:02.221 bw ( KiB/s): min=88712, max=91792, per=99.97%, avg=90274.67, stdev=1540.50, samples=3 00:13:02.221 iops : min=22178, max=22948, avg=22568.67, stdev=385.13, samples=3 00:13:02.221 write: IOPS=22.5k, BW=87.7MiB/s (92.0MB/s)(176MiB/2001msec); 0 zone resets 00:13:02.221 slat (nsec): min=3865, max=42160, avg=4789.99, stdev=1080.93 00:13:02.221 clat (usec): min=224, max=9613, avg=2832.04, stdev=306.67 00:13:02.221 lat (usec): min=229, max=9626, avg=2836.83, stdev=306.94 00:13:02.221 clat percentiles (usec): 00:13:02.221 | 1.00th=[ 2147], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2704], 00:13:02.221 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:13:02.221 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097], 00:13:02.221 
| 99.00th=[ 3851], 99.50th=[ 4424], 99.90th=[ 6063], 99.95th=[ 7439], 00:13:02.221 | 99.99th=[ 9241] 00:13:02.221 bw ( KiB/s): min=88128, max=91904, per=100.00%, avg=90504.00, stdev=2068.57, samples=3 00:13:02.221 iops : min=22032, max=22976, avg=22626.00, stdev=517.14, samples=3 00:13:02.221 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:02.221 lat (msec) : 2=0.63%, 4=98.62%, 10=0.71% 00:13:02.221 cpu : usr=99.15%, sys=0.20%, ctx=4, majf=0, minf=606 00:13:02.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:02.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.221 issued rwts: total=45175,44935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.221 00:13:02.221 Run status group 0 (all jobs): 00:13:02.221 READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=176MiB (185MB), run=2001-2001msec 00:13:02.221 WRITE: bw=87.7MiB/s (92.0MB/s), 87.7MiB/s-87.7MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:13:02.221 ----------------------------------------------------- 00:13:02.221 Suppressions used: 00:13:02.221 count bytes template 00:13:02.221 1 32 /usr/src/fio/parse.c 00:13:02.221 1 8 libtcmalloc_minimal.so 00:13:02.221 ----------------------------------------------------- 00:13:02.221 00:13:02.221 ************************************ 00:13:02.221 END TEST nvme_fio 00:13:02.221 ************************************ 00:13:02.221 12:23:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:02.221 12:23:25 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:02.221 00:13:02.221 real 0m22.487s 00:13:02.221 user 0m15.590s 00:13:02.221 sys 0m10.517s 00:13:02.221 12:23:25 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.221 12:23:25 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:02.221 ************************************ 00:13:02.221 END TEST nvme 00:13:02.221 ************************************ 00:13:02.221 00:13:02.221 real 1m38.010s 00:13:02.221 user 3m43.815s 00:13:02.221 sys 0m29.765s 00:13:02.221 12:23:25 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.221 12:23:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.221 12:23:25 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:02.221 12:23:25 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:02.221 12:23:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:02.221 12:23:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.221 12:23:25 -- common/autotest_common.sh@10 -- # set +x 00:13:02.221 ************************************ 00:13:02.221 START TEST nvme_scc 00:13:02.221 ************************************ 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:02.221 * Looking for test storage... 
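The nvme_fio pass above drives the SPDK fio ioengine plugin with ASan active: the trace locates the libasan the plugin was linked against via ldd, then preloads it ahead of the plugin so the sanitizer runtime initializes first. A minimal sketch of that pattern using the paths from the trace (fio_with_asan is a hypothetical wrapper, not one of the SPDK helpers):

# Hypothetical wrapper reproducing the LD_PRELOAD pattern traced above.
fio_with_asan() {
  local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  local asan_lib
  # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
  # the resolved path is the third field.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # The sanitizer must be loaded before the ioengine plugin it instruments.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}
fio_with_asan /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096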
00:13:02.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.221 12:23:25 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.221 --rc genhtml_branch_coverage=1 00:13:02.221 --rc genhtml_function_coverage=1 00:13:02.221 --rc genhtml_legend=1 00:13:02.221 --rc geninfo_all_blocks=1 00:13:02.221 --rc geninfo_unexecuted_blocks=1 00:13:02.221 00:13:02.221 ' 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.221 --rc genhtml_branch_coverage=1 00:13:02.221 --rc genhtml_function_coverage=1 00:13:02.221 --rc genhtml_legend=1 00:13:02.221 --rc geninfo_all_blocks=1 00:13:02.221 --rc geninfo_unexecuted_blocks=1 00:13:02.221 00:13:02.221 ' 00:13:02.221 12:23:25 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:13:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.221 --rc genhtml_branch_coverage=1 00:13:02.221 --rc genhtml_function_coverage=1 00:13:02.221 --rc genhtml_legend=1 00:13:02.221 --rc geninfo_all_blocks=1 00:13:02.221 --rc geninfo_unexecuted_blocks=1 00:13:02.221 00:13:02.222 ' 00:13:02.222 12:23:25 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:02.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.222 --rc genhtml_branch_coverage=1 00:13:02.222 --rc genhtml_function_coverage=1 00:13:02.222 --rc genhtml_legend=1 00:13:02.222 --rc geninfo_all_blocks=1 00:13:02.222 --rc geninfo_unexecuted_blocks=1 00:13:02.222 00:13:02.222 ' 00:13:02.222 12:23:25 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:02.222 12:23:25 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:02.222 12:23:25 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.480 12:23:25 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.480 12:23:25 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.480 12:23:25 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.480 12:23:25 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.480 12:23:25 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.480 12:23:25 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.480 12:23:25 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.480 12:23:25 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:02.480 12:23:25 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
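The lcov gate traced above hinges on cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' into the ver1/ver2 arrays and compared field by field until one differs. A simplified standalone sketch of the '<' case (the real helper also routes each field through its decimal normalizer and handles the other comparison operators):

# Returns 0 when version $1 sorts strictly before version $2.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"   # same separators as the trace
  IFS=.-: read -ra ver2 <<< "$2"
  local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields compare as 0
    (( a < b )) && return 0          # first differing field decides
    (( a > b )) && return 1
  done
  return 1                           # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches 'lt 1.15 2' above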
00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:02.480 12:23:25 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:02.480 12:23:25 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.480 12:23:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:02.480 12:23:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:02.480 12:23:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:02.480 12:23:25 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:03.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:03.305 Waiting for block devices as requested 00:13:03.305 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.305 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.563 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.563 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.840 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:08.840 12:23:31 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:08.840 12:23:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:08.840 12:23:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:08.840 12:23:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:08.840 12:23:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
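The register dump that begins here is nvme_get from test/common/nvme/functions.sh: it runs nvme id-ctrl against /dev/nvme0 and turns every "reg : val" line of the human-readable output into one key of the global associative array nvme0 (vid, ssvid, sn, and so on below). A minimal sketch of that parsing loop, assuming nvme-cli's id-ctrl text format (parse_id_ctrl is a hypothetical stand-in, and its whitespace trimming is simplified relative to the real helper):

declare -gA ctrl=()
# Hypothetical reduction of the nvme_get read loop traced below.
parse_id_ctrl() {
  local dev=$1 reg val
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip the banner and blank lines
    reg=${reg//[[:space:]]/}         # keys come out as "vid", "ssvid", ...
    ctrl[$reg]=${val# }              # drop the space after the colon
  done < <(nvme id-ctrl "$dev")
}
parse_id_ctrl /dev/nvme0
echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"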
00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.840 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:08.841 12:23:31 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.841 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.842 12:23:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.842 12:23:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.843 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.844 12:23:31 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:08.844 12:23:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:08.844 12:23:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:08.844 12:23:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:08.844 12:23:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:08.844 12:23:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.844 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.845 
12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
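Once `nvme_get` returns, every identify field is a plain string lookup on these arrays. A hypothetical consumer under that assumption (`supports_scc` is illustrative, not a helper from nvme/functions.sh; ONCS bit 8 advertises the Copy command, which is the capability this nvme_scc suite is exercising):

```bash
# Illustrative only -- not a function from nvme/functions.sh.
supports_scc() {
    local -n _ctrl=$1                    # nameref onto e.g. nvme1
    (( ${_ctrl[oncs]:-0} & 0x100 ))      # ONCS bit 8: Copy command
}

# nvme1[oncs]=0x15d is parsed further down this trace, so:
#   supports_scc nvme1 && echo "/dev/nvme1 supports Simple Copy"
```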
00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:08.845 12:23:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:08.845 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:08.846 12:23:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:08.846 12:23:32 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:08.846 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
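The records that follow (and the matching run for nvme2 at the end of this excerpt) are the tail of the discovery loop in nvme/functions.sh: every /sys/class/nvme/nvmeN is mapped to its PCI address, filtered through `pci_can_use`, parsed with `nvme_get`, its namespaces walked through a nameref array, and finally registered in the `ctrls`/`nvmes`/`bdfs`/`ordered_ctrls` tables (functions.sh@47-@63 in the trace). A sketch of that skeleton, reconstructed from the trace; how `pci` is derived and the `PCI_BLOCKED`/`PCI_ALLOWED` variable names are assumptions, since the trace only shows the resulting values:

```bash
#!/usr/bin/env bash
# Reconstructed skeleton of the discovery loop traced above; relies on the
# nvme_get sketch shown earlier. The readlink step and the PCI_BLOCKED /
# PCI_ALLOWED names are assumptions.
declare -gA ctrls=() nvmes=() bdfs=()
declare -ga ordered_ctrls=()

pci_can_use() {
    local i    # kept to mirror the traced scripts/common.sh function
    [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1   # "[[ <blocked> =~ $pci ]]"
    [[ -z ${PCI_ALLOWED:-} ]] && return 0      # "[[ -z '' ]]" -> return 0
    [[ ${PCI_ALLOWED} =~ $1 ]]
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"

    declare -gA "${ctrl_dev}_ns=()"
    declare -n _ctrl_ns=${ctrl_dev}_ns                # alias, as in @53
    for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev                   # nvme2_ns[1]=nvme2n1
    done
    unset -n _ctrl_ns                                 # rebind cleanly next pass

    ctrls["$ctrl_dev"]=$ctrl_dev                      # @60-@63 bookkeeping
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the ns array
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index by controller no.
done
```

Storing the *name* of each namespace array in `nvmes` (rather than its contents) is what lets later callers recover the per-controller namespace map with their own `local -n`, exactly as the `local -n _ctrl_ns=nvme1_ns` record in this trace does.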
00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.847 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.848 
12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:08.848 12:23:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:08.848 12:23:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:08.848 12:23:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:08.848 12:23:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:08.848 12:23:32 nvme_scc -- 
00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:13:08.848 12:23:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:13:08.849 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2 identity: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 subnqn=nqn.2019-08.org.qemu:12342
00:13:08.850 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2 capabilities: oaes=0x100 ctratt=0x8000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 wctemp=343 cctemp=373 sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1
00:13:08.851 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2, all remaining id-ctrl fields zero: rtd3r rtd3e rrls crdt1-3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax endgidmax anatt anacap anagrpmax nanagrpid pels domainid megcap maxcmd fuses fna awun awupf icsvscc nwpc acwu mnan maxdna maxcna ioccsz iorcsz icdoff fcatt msdbd ofcs
00:13:08.851 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' nvme2[active_power_workload]=-
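The id-ctrl values above identify QEMU's emulated controller (vid 0x1b36 is the Red Hat/QEMU vendor ID; the subsystem NQN is nqn.2019-08.org.qemu:12342). Those fields can be spot-checked outside the harness; a hedged sketch, assuming the device still enumerates as /dev/nvme2:

    # Re-read just the identity fields recorded in the trace (illustrative grep).
    /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 | grep -E '^(vid|ssvid|sn|mn|fr|mdts|nn|subnqn)'
    # mdts is a power-of-two multiple of the controller's minimum page size:
    # with 4 KiB pages, mdts=7 caps transfers at 4096 * 2**7 = 512 KiB.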
00:13:08.851 12:23:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:09.115 12:23:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] -> ns_dev=nvme2n1
00:13:09.115 12:23:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:09.116 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 mssrl=128 mcl=128 msrc=127 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:09.116 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1, all remaining id-ns fields zero: nawun nawupf nacwu nabsn nabo nabspf noiob nvmcap npwg npwa npdg npda nows nulbaf anagrpid nsattr nvmsetid endgid
00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
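nvme2n1 reports flbas=0x4, so LBA format 4 (ms:0 lbads:12) is in use: 2^12 = 4096-byte blocks with no metadata, which makes nsze=0x100000 blocks a 4 GiB namespace. A small sketch of that decoding, assuming the arrays populated above are still in scope:

    # Decode the in-use LBA format from the traced values (sketch; bits 0-3
    # of flbas select the format index, per the NVMe base specification).
    fmt=$(( ${nvme2n1[flbas]} & 0xf ))                          # -> 4
    lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "${nvme2n1[lbaf$fmt]}")
    echo "block size: $(( 1 << lbads )) bytes"                  # -> 4096
    echo "capacity  : $(( ${nvme2n1[nsze]} << lbads )) bytes"   # -> 4 GiB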
nvme/functions.sh@18 -- # shift 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:09.117 12:23:32 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.117 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
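
The wall of "IFS=:", "read -r reg val" and eval records above is SPDK's nvme_get helper in test/nvme/functions.sh flattening the output of "nvme id-ns /dev/nvme2n2" into a global associative array, one register per key. A minimal sketch of the same pattern, assuming the usual "field : value" layout that nvme-cli prints (nvme_get_sketch is a hypothetical stand-in name, not the real helper):

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # same trick as functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            reg=${reg//[[:space:]]/}         # field names arrive padded
            val=${val# }                     # drop the space right after ':'
            eval "${ref}[${reg}]=\"${val}\"" # e.g. nvme2n2[mssrl]="128"
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
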
00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
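
The three values just captured are the namespace's Simple Copy limits, which is what this nvme_scc run ultimately cares about: mssrl caps each source range at 128 logical blocks, mcl caps a whole Copy command at 128 blocks, and msrc=127 is a 0-based count allowing 128 source ranges. A hypothetical validity check under those spec semantics (scc_request_fits is not a functions.sh helper):

    scc_request_fits() {
        local -n ns=$1; shift                  # array name, e.g. nvme2n2
        local total=0 range
        (( $# <= ns[msrc] + 1 )) || return 1   # msrc is 0-based
        for range in "$@"; do
            (( range <= ns[mssrl] )) || return 1
            (( total += range ))
        done
        (( total <= ns[mcl] ))
    }
    # scc_request_fits nvme2n2 64 64 succeeds; a third 64-block range would break mcl
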
00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 
12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 
12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.118 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:09.119 12:23:32 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:09.119 
12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.119 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:09.120 12:23:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:09.120 12:23:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:09.120 12:23:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:09.120 12:23:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:09.120 12:23:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
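
The id-ctrl fields going by here mark nvme3 as another QEMU device (vid 0x1b36, "QEMU NVMe Ctrl") reporting NVMe version 0x10400 and mdts=7. Those two pack as follows; the 4 KiB multiplier is an assumption, since the real unit is the controller's CAP.MPSMIN page size, which never appears in this trace:

    ver=$((0x10400)) mdts=7 mpsmin_bytes=4096    # page size assumed, not logged
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))
    printf 'max transfer: %d KiB\n' $(( mpsmin_bytes * (1 << mdts) / 1024 ))

which prints NVMe 1.4.0 and a 512 KiB transfer ceiling.
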
00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.120 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
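
ctratt=0x88010 is the notable capture in this stretch: bit 4 advertises Endurance Groups and bit 19 Flexible Data Placement (TP4146), which fits the fdp-subsys3 subsystem name this controller reports further down; the remaining set bit, 15, should be Extended LBA Formats, though nothing else in the log confirms that reading. Tested the same way the script tests bitfields:

    ctratt=0x88010
    (( ctratt & 1 << 4 ))  && echo 'endurance groups'          # 0x00010
    (( ctratt & 1 << 19 )) && echo 'flexible data placement'   # 0x80000
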
00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 
12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:09.121 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
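
Nearly everything in this run of registers is zero on QEMU; the exception worth a note is the pair of thermal thresholds captured a few records back, which the spec reports in Kelvin:

    wctemp=343 cctemp=373                  # from the id-ctrl parse above
    echo "warning:  $((wctemp - 273)) C"   # 70 C
    echo "critical: $((cctemp - 273)) C"   # 100 C
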
00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
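
oncs=0x15d, parsed a few records up, is the field the whole test selects on. Bit by bit (NVMe 2.0 numbering) it advertises Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp, and, in bit 8, Copy; bit 8 is exactly what ctrl_has_scc checks at the end of this chunk:

    oncs=0x15d                     # 0b1_0101_1101: bits 0, 2, 3, 4, 6, 8
    (( oncs & 1 << 8 )) && echo 'Simple Copy supported'
    (( oncs & 1 << 5 )) || echo 'reservations not supported'   # bit 5 is clear
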
00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.122 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:09.123 12:23:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:09.123 
12:23:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:09.123 12:23:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:09.123 12:23:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:09.123 12:23:32 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:10.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:10.628 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.628 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.628 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:10.628 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:13:10.887 12:23:33 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:10.887 12:23:33 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:10.887 12:23:33 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.887 12:23:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:10.887 ************************************ 00:13:10.887 START TEST nvme_simple_copy 00:13:10.887 ************************************ 00:13:10.887 12:23:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:11.146 Initializing NVMe Controllers 00:13:11.146 Attaching to 0000:00:10.0 00:13:11.146 Controller supports SCC. Attached to 0000:00:10.0 00:13:11.146 Namespace ID: 1 size: 6GB 00:13:11.146 Initialization complete. 00:13:11.146 00:13:11.146 Controller QEMU NVMe Ctrl (12340 ) 00:13:11.146 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:11.146 Namespace Block Size:4096 00:13:11.146 Writing LBAs 0 to 63 with Random Data 00:13:11.146 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:11.146 LBAs matching Written Data: 64 00:13:11.146 ************************************ 00:13:11.146 END TEST nvme_simple_copy 00:13:11.146 ************************************ 00:13:11.146 00:13:11.146 real 0m0.307s 00:13:11.146 user 0m0.115s 00:13:11.146 sys 0m0.091s 00:13:11.146 12:23:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.146 12:23:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:11.146 ************************************ 00:13:11.146 END TEST nvme_scc 00:13:11.146 ************************************ 00:13:11.146 00:13:11.146 real 0m9.090s 00:13:11.146 user 0m1.584s 00:13:11.146 sys 0m2.532s 00:13:11.146 12:23:34 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.146 12:23:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:11.146 12:23:34 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:11.146 12:23:34 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:11.146 12:23:34 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:11.146 12:23:34 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:11.146 12:23:34 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:11.146 12:23:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:11.146 12:23:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.146 12:23:34 -- common/autotest_common.sh@10 -- # set +x 00:13:11.146 ************************************ 00:13:11.146 START TEST nvme_fdp 00:13:11.146 ************************************ 00:13:11.146 12:23:34 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:13:11.406 * Looking for test storage... 
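[Editor's note] Why ctrl=nvme1 ran the simple-copy test: the ctrl_has_scc calls traced before it read each controller's parsed ONCS word back through a bash nameref and test bit 8, the Simple Copy bit of the NVMe ONCS field. All four QEMU controllers report oncs=0x15d, whose bit 8 is set, and nvme_scc.sh simply takes the first match. A sketch of that check, reconstructed from the trace (ctrl_has_scc and the ctrls array are the harness's own names; the _sketch suffix marks the reconstruction):

    ctrl_has_scc_sketch() {
        local -n _ctrl=$1                  # nameref into the parsed array, e.g. nvme1
        local oncs=${_ctrl[oncs]:-0}       # 0x15d for every controller in this run
        (( oncs & 1 << 8 ))                # ONCS bit 8 = Simple Copy command supported
    }
    for ctrl in "${!ctrls[@]}"; do         # ctrls[] was filled during scan_nvme_ctrls
        ctrl_has_scc_sketch "$ctrl" && echo "$ctrl"
    done                                   # first match wins -> ctrl=nvme1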
00:13:11.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:11.406 12:23:34 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:11.406 12:23:34 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:13:11.406 12:23:34 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:11.406 12:23:34 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.406 12:23:34 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:11.407 12:23:34 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.407 12:23:34 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:11.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.407 --rc genhtml_branch_coverage=1 00:13:11.407 --rc genhtml_function_coverage=1 00:13:11.407 --rc genhtml_legend=1 00:13:11.407 --rc geninfo_all_blocks=1 00:13:11.407 --rc geninfo_unexecuted_blocks=1 00:13:11.407 00:13:11.407 ' 00:13:11.407 12:23:34 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:11.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.407 --rc genhtml_branch_coverage=1 00:13:11.407 --rc genhtml_function_coverage=1 00:13:11.407 --rc genhtml_legend=1 00:13:11.407 --rc geninfo_all_blocks=1 00:13:11.407 --rc geninfo_unexecuted_blocks=1 00:13:11.407 00:13:11.407 ' 00:13:11.407 12:23:34 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:13:11.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.407 --rc genhtml_branch_coverage=1 00:13:11.407 --rc genhtml_function_coverage=1 00:13:11.407 --rc genhtml_legend=1 00:13:11.407 --rc geninfo_all_blocks=1 00:13:11.407 --rc geninfo_unexecuted_blocks=1 00:13:11.407 00:13:11.407 ' 00:13:11.407 12:23:34 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:11.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.407 --rc genhtml_branch_coverage=1 00:13:11.407 --rc genhtml_function_coverage=1 00:13:11.407 --rc genhtml_legend=1 00:13:11.407 --rc geninfo_all_blocks=1 00:13:11.407 --rc geninfo_unexecuted_blocks=1 00:13:11.407 00:13:11.407 ' 00:13:11.407 12:23:34 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:11.407 12:23:34 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:11.407 12:23:34 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:11.407 12:23:34 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:11.407 12:23:34 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.407 12:23:34 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.407 12:23:34 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.407 12:23:34 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.407 12:23:34 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.407 12:23:34 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.407 12:23:34 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.407 12:23:34 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.407 12:23:34 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:11.407 12:23:34 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
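[Editor's note] The lcov probe above is scripts/common.sh's cmp_versions at work: both version strings are split on ".", "-" and ":" and compared numerically field by field, so lcov 1.15 sorts below 2 and the pre-2.x LCOV_OPTS block is exported. (The repeated /opt/... entries in the PATH echoes are just paths/export.sh prepending its toolchain dirs once per source; noisy but harmless.) A self-contained sketch of the comparison — version_lt is a stand-in name for the harness's lt/cmp_versions pair:

    version_lt() {                          # true if $1 < $2, e.g. version_lt 1.15 2
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do   # missing fields compare as 0
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1                            # equal versions are not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"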
00:13:11.407 12:23:34 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:11.666 12:23:34 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:11.666 12:23:34 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.666 12:23:34 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:12.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:12.235 Waiting for block devices as requested 00:13:12.495 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.495 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.495 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.754 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.072 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:18.072 12:23:41 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:18.072 12:23:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:18.072 12:23:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:18.072 12:23:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:18.072 12:23:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
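[Editor's note] scan_nvme_ctrls, entered above, is what populates the ctrls/nvmes/bdfs arrays: each /sys/class/nvme/nvmeN is mapped to its PCI address, filtered through pci_can_use (the PCI_ALLOWED/PCI_BLOCKED checks traced here), and handed to nvme_get. A condensed sketch of the walk; the readlink resolution is an assumption, since the trace only shows the already-resolved address 0000:00:11.0:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                          # the [[ -e ... ]] probe above
        pci=$(basename "$(readlink -f "$ctrl/device")")     # assumed; yields e.g. 0000:00:11.0
        [[ ${PCI_BLOCKED:-} == *"$pci"* ]] && continue      # pci_can_use, reduced to its block check
        ctrl_dev=${ctrl##*/}                                # nvme0, nvme1, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # the parser traced earlier
    done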
00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:18.072 12:23:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:18.072 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:18.073 12:23:41 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
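[Editor's note] A worked reading of the oacs=0x12a value captured just above, using the same bit-test idiom the harness applies to ONCS. The bit names follow the Optional Admin Command Support field of the NVMe base specification (an editorial gloss, not something the script prints): 0x12a = 0b100101010, i.e. bits 1, 3, 5 and 8.

    oacs=0x12a
    (( oacs & 1 << 1 )) && echo "Format NVM supported"              # bit 1
    (( oacs & 1 << 3 )) && echo "Namespace Management supported"    # bit 3
    (( oacs & 1 << 5 )) && echo "Directives supported"              # bit 5
    (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config supported"  # bit 8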
00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:18.073 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.073 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:18.074 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:18.074 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.074 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:18.075 
12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:18.075 12:23:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:18.075 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:18.076 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:18.076 12:23:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:18.076 12:23:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:18.076 12:23:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:18.076 12:23:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # 
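At functions.sh@60-63 just above, the fully parsed nvme0 is registered in the ctrls/nvmes/bdfs tables (BDF 0000:00:11.0) and slotted into ordered_ctrls; the loop then moves to /sys/class/nvme/nvme1 and gates its BDF through pci_can_use at scripts/common.sh@18-27, where both filter lists expand empty in this run, so the checks fall through and it returns 0. A rough, hedged equivalent of that gate; the PCI_ALLOWED/PCI_BLOCKED names and the glob matching are assumptions for illustration, not read from this log:

  # Allow/block gate in the spirit of scripts/common.sh@18-27: a blocked
  # BDF fails, an empty allow list admits everything, otherwise the BDF
  # must appear in the allow list.
  pci_can_use() {
      local bdf=$1
      [[ " ${PCI_BLOCKED-} " == *" $bdf "* ]] && return 1
      [[ -z ${PCI_ALLOWED-} ]] && return 0
      [[ " ${PCI_ALLOWED} " == *" $bdf "* ]]
  }

  pci_can_use 0000:00:10.0 && echo "0000:00:10.0 is usable"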
IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.076 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 
12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.077 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 
12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:18.078 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.078 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
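With nvme1's id-ctrl cached, functions.sh@53-58 repeats the namespace walk already seen for nvme0: bind a nameref _ctrl_ns to the per-controller array, glob the controller's sysfs children (nvme1n1, ...), and run nvme_get with id-ns against each block node. A self-contained sketch of that walk, with illustrative array and function names:

  #!/usr/bin/env bash
  shopt -s nullglob
  declare -A nvme1_ns

  collect_namespaces() {                    # mirrors functions.sh@53-58
      local ctrl=$1 ns
      local -n _ctrl_ns=${ctrl##*/}_ns      # nameref: nvme1 -> nvme1_ns
      for ns in "$ctrl/${ctrl##*/}n"*; do   # e.g. .../nvme1/nvme1n1
          [[ -e $ns ]] || continue
          _ctrl_ns[${ns##*n}]=${ns##*/}     # index 1 -> nvme1n1
      done
  }

  collect_namespaces /sys/class/nvme/nvme1
  declare -p nvme1_ns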
0x17a17a ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:18.079 12:23:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:18.079 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:18.080 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:18.081 12:23:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:18.081 12:23:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:18.081 12:23:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:18.081 12:23:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:18.081 
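[editor's note] At this point in the trace nvme1n1 is fully parsed and nvme1 is registered (functions.sh@58-63), after which the outer loop advances to nvme2. Below is a minimal sketch of that discovery loop, reconstructed from the functions.sh line numbers visible in the trace. The readlink-based BDF resolution is an illustrative assumption (the log only shows the resulting value, pci=0000:00:12.0), and the namespace walk that runs between the id-ctrl parse and registration is sketched further below; this is not the authoritative SPDK implementation.

    # Shape of the /sys/class/nvme discovery loop traced at functions.sh@47-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                         # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")    # @49 (assumed source of the BDF)
        pci_can_use "$pci" || continue                     # @50: PCI allow/block-list gate
        ctrl_dev=${ctrl##*/}                               # @51: e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"      # @52: parse id-ctrl into ${ctrl_dev}[...]
        # ... per-namespace id-ns parsing happens here (@53-58), see sketch below ...
        ctrls["$ctrl_dev"]=$ctrl_dev                       # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                  # @61: name of this controller's ns map
        bdfs["$ctrl_dev"]=$pci                             # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # @63: indexed by controller number
    done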
12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:18.081 12:23:41 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.081 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.082 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:18.083 12:23:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:18.083 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:18.084 12:23:41 nvme_fdp -- 
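[editor's note] The block above is one complete id-ctrl pass for nvme2. Every register line in this trace comes from the same small reader, nvme_get (functions.sh@16-23): it shifts off the target array name, declares that name as a global associative array, runs nvme-cli, and evals each non-empty reg:val pair into the array. A minimal sketch under those assumptions follows; the key/value trimming is an approximation of whatever produces values like sn='12342 ' in the log, and the eval quoting is simplified.

    # Approximate shape of nvme_get as exercised in this log. The @18, @20,
    # @16 and @21-23 markers match the shift, the 'local -gA' declaration,
    # the nvme-cli invocation, and the read/eval loop seen in the trace.
    nvme_get() {
        local ref=$1 reg val
        shift                                         # @18
        local -gA "$ref=()"                           # @20: e.g. nvme2=()
        while IFS=: read -r reg val; do               # @21
            [[ -n $val ]] || continue                 # @22: skip banner/blank lines
            reg=${reg//[[:space:]]/}                  # assumed trimming of the key
            eval "${ref}[${reg}]=\"${val# }\""        # @23 (quoting simplified)
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. id-ctrl /dev/nvme2
    }

    # Usage mirroring the trace:
    #   nvme_get nvme2 id-ctrl /dev/nvme2
    #   echo "${nvme2[oacs]}"    # -> 0x12a, as captured above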
nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.084 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- 
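[Note] The run of functions.sh@21-23 entries above (and continuing below for nvme2n2) is one loop: read the plain-text output of `nvme id-ns` / `nvme id-ctrl` line by line, split on the first colon, and stash each field in a global associative array named after the device. A minimal runnable sketch of that pattern, assuming nvme-cli's "reg : val" output format; nvme_get_sketch is an illustrative name, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the traced pattern (not the verbatim nvme/functions.sh helper):
    # nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1 fills nvme2n1[nsze]=0x100000 etc.
    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"               # matches the traced: local -gA 'nvme2n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue     # matches the traced [[ -n ... ]] guard
            reg=${reg//[[:space:]]/}      # nvme-cli pads keys with spaces
            val=${val# }                  # drop the space after the colon
            eval "${ref}[$reg]=\"$val\""  # matches the traced eval
        done < <(nvme "$subcmd" "$dev")   # this CI invokes /usr/local/src/nvme-cli/nvme
    }

    nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1
    echo "${nvme2n1[nsze]}"               # 0x100000 on this rig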
nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.085 12:23:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.085 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:18.086 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:18.086 12:23:41 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
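[Note] The flbas/lbaf pairs captured for each namespace are enough to derive the active block size: flbas bits 3:0 index into lbaf0..lbaf7 (valid here since nlbaf=7 < 16). A worked example against the nvme2n2 values recorded above, assuming the array was filled as in the earlier sketch:

    flbas_idx=$(( nvme2n2[flbas] & 0xf ))        # 0x4 & 0xf -> 4
    lbaf=${nvme2n2[lbaf$flbas_idx]}              # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}                        # '12 rp:0 (in use)'
    lbads=${lbads%% *}                           # '12'
    echo "in-use block: $((1 << lbads)) B"       # 2^12 = 4096 bytes, no metadata (ms:0)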
00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:18.087 
12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.087 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:18.088 12:23:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:18.088 12:23:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:18.088 12:23:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:18.088 12:23:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:18.088 12:23:41 nvme_fdp -- 
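[Note] The functions.sh@47-63 entries just above close the per-controller loop for nvme2 (registering it in ctrls/nvmes/bdfs/ordered_ctrls) and open it for nvme3 at 0000:00:13.0. A condensed sketch of that bookkeeping, reusing nvme_get_sketch from the earlier note; the pci_can_use stub, the sysfs address read, and the nameref plumbing for _ctrl_ns are assumptions, as the trace only shows their results:

    pci_can_use() { true; }                        # stand-in for scripts/common.sh helper
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(< "$ctrl/address")                   # e.g. 0000:00:12.0 (assumed source)
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                       # nvme2
        nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"            # assumed: array the nameref targets
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl_dev}n"*; do        # nvme2n1 nvme2n2 nvme2n3
            [[ -e $ns ]] || continue
            nvme_get_sketch "${ns##*/}" id-ns "/dev/${ns##*/}"
            _ctrl_ns[${ns##*n}]=${ns##*/}          # traced: _ctrl_ns[3]=nvme2n3
        done
        unset -n _ctrl_ns
        ctrls["$ctrl_dev"]=$ctrl_dev               # traced: ctrls[...]=nvme2
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns          # traced: nvmes[...]=nvme2_ns
        bdfs["$ctrl_dev"]=$pci                     # traced: bdfs[...]=0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # traced: index 2 -> nvme2
    done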
nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:18.088 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:18.349 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 
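[Note] Relevant to this nvme_fdp run: the ctratt=0x88010 just captured has bit 19 (0x80000) set, which, per my reading of NVMe 2.0 / TP4146, advertises Flexible Data Placement (bit 4, 0x10, is Endurance Groups). A gate in the same array style, assuming nvme3 was filled as sketched earlier:

    # ctratt bit 19 = Flexible Data Placement support (NVMe 2.0 TP4146 reading)
    if (( nvme3[ctratt] & (1 << 19) )); then
        echo "nvme3 advertises FDP"                # 0x88010 & 0x80000 -> nonzero
    fi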
12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.350 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.350 12:23:41 nvme_fdp 
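[Note] The two temperature thresholds just captured are in kelvin, as id-ctrl reports them; an integer conversion against the same array (273 rather than 273.15, since this is shell arithmetic):

    echo "warning:  $(( nvme3[wctemp] - 273 )) C"  # 343 K -> 70 C
    echo "critical: $(( nvme3[cctemp] - 273 )) C"  # 373 K -> 100 C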
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 
12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:18.351 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
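(The wall of eval traces above and below is test/nvme/functions.sh folding this controller's identify output into an nvme3[register]=value map, one "reg : val" pair per iteration. Condensed, the traced loop is roughly the sketch below; the associative-array declaration, the whitespace trimming, and the id-ctrl input source are assumptions for illustration, not part of the trace.)

    # Hedged condensation of the loop traced here; the real code lives in
    # test/nvme/functions.sh and differs in detail.
    declare -A nvme3                         # register -> value map (assumed declaration)
    while IFS=: read -r reg val; do          # split each "reg : val" line on ':'
        [[ -n $val ]] || continue            # the traced '[[ -n ... ]]' guard
        reg=${reg// /} val=${val# }          # strip padding (assumed trimming)
        eval "nvme3[${reg}]=\"${val}\""      # e.g. nvme3[sqes]=0x66, nvme3[nn]=256
    done < <(nvme id-ctrl /dev/nvme3)        # assumed source of the reg/val pairs
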
00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:18.352 12:23:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:18.352 12:23:41 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:18.352 12:23:41 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:18.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:19.865 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.865 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.865 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.865 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.865 12:23:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:19.865 12:23:43 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:19.865 12:23:43 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.865 12:23:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:19.865 ************************************ 00:13:19.865 START TEST nvme_flexible_data_placement 00:13:19.865 ************************************ 00:13:19.865 12:23:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:20.124 Initializing NVMe Controllers 00:13:20.124 Attaching to 0000:00:13.0 00:13:20.124 Controller supports FDP Attached to 0000:00:13.0 00:13:20.124 Namespace ID: 1 Endurance Group ID: 1 00:13:20.124 Initialization complete. 00:13:20.124 00:13:20.124 ================================== 00:13:20.124 == FDP tests for Namespace: #01 == 00:13:20.124 ================================== 00:13:20.124 00:13:20.124 Get Feature: FDP: 00:13:20.124 ================= 00:13:20.124 Enabled: Yes 00:13:20.124 FDP configuration Index: 0 00:13:20.124 00:13:20.124 FDP configurations log page 00:13:20.124 =========================== 00:13:20.124 Number of FDP configurations: 1 00:13:20.124 Version: 0 00:13:20.124 Size: 112 00:13:20.124 FDP Configuration Descriptor: 0 00:13:20.124 Descriptor Size: 96 00:13:20.124 Reclaim Group Identifier format: 2 00:13:20.124 FDP Volatile Write Cache: Not Present 00:13:20.124 FDP Configuration: Valid 00:13:20.124 Vendor Specific Size: 0 00:13:20.124 Number of Reclaim Groups: 2 00:13:20.124 Number of Reclaim Unit Handles: 8 00:13:20.124 Max Placement Identifiers: 128 00:13:20.124 Number of Namespaces Supported: 256 00:13:20.124 Reclaim Unit Nominal Size: 6000000 bytes 00:13:20.124 Estimated Reclaim Unit Time Limit: Not Reported 00:13:20.124 RUH Desc #000: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #001: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #002: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #003: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #004: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #005: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #006: RUH Type: Initially Isolated 00:13:20.124 RUH Desc #007: RUH Type: Initially Isolated 00:13:20.124 00:13:20.124 FDP reclaim unit handle usage log page 00:13:20.124 ====================================== 00:13:20.124 Number of Reclaim Unit Handles: 8 00:13:20.124 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:20.124 RUH Usage Desc #001: RUH Attributes: Unused 00:13:20.124 RUH Usage Desc #002: RUH Attributes: Unused 00:13:20.124 RUH Usage Desc #003: RUH Attributes: Unused 00:13:20.124 RUH Usage Desc #004: RUH Attributes: Unused 00:13:20.124 RUH Usage Desc #005: RUH Attributes: Unused 00:13:20.124 RUH Usage Desc #006: RUH Attributes: Unused 00:13:20.124 RUH Usage Desc #007: RUH Attributes: Unused 00:13:20.124 00:13:20.124 FDP statistics log page 00:13:20.124 ======================= 00:13:20.124 Host bytes with metadata written: 1080303616 00:13:20.124 Media bytes with metadata written: 1080422400 00:13:20.124 Media bytes erased: 0 00:13:20.124 00:13:20.124 FDP Reclaim unit handle status 00:13:20.124 ============================== 00:13:20.124 Number of RUHS descriptors: 2 00:13:20.124 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000019be 00:13:20.124 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:20.124 00:13:20.124 FDP write on placement id: 0 success 00:13:20.124 00:13:20.124 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:13:20.124 00:13:20.124 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:20.124 00:13:20.124 Get Feature: FDP Events for Placement handle: #0 00:13:20.124 ======================== 00:13:20.124 Number of FDP Events: 6 00:13:20.124 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:20.124 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:20.124 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:20.124 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:20.124 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:20.124 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:20.124 00:13:20.124 FDP events log page 00:13:20.124 =================== 00:13:20.124 Number of FDP events: 1 00:13:20.124 FDP Event #0: 00:13:20.124 Event Type: RU Not Written to Capacity 00:13:20.124 Placement Identifier: Valid 00:13:20.124 NSID: Valid 00:13:20.124 Location: Valid 00:13:20.124 Placement Identifier: 0 00:13:20.124 Event Timestamp: 8 00:13:20.124 Namespace Identifier: 1 00:13:20.124 Reclaim Group Identifier: 0 00:13:20.124 Reclaim Unit Handle Identifier: 0 00:13:20.124 00:13:20.124 FDP test passed 00:13:20.124 00:13:20.124 real 0m0.293s 00:13:20.124 user 0m0.090s 00:13:20.124 sys 0m0.102s 00:13:20.124 12:23:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.124 12:23:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:20.124 ************************************ 00:13:20.124 END TEST nvme_flexible_data_placement 00:13:20.124 ************************************ 00:13:20.124 00:13:20.384 real 0m9.031s 00:13:20.384 user 0m1.502s 00:13:20.384 sys 0m2.575s 00:13:20.384 12:23:43 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.384 12:23:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:20.384 ************************************ 00:13:20.384 END TEST nvme_fdp 00:13:20.384 ************************************ 00:13:20.384 12:23:43 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:20.384 12:23:43 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:20.384 12:23:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:20.384 12:23:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.384 12:23:43 -- common/autotest_common.sh@10 -- # set +x 00:13:20.384 ************************************ 00:13:20.384 START TEST nvme_rpc 00:13:20.384 ************************************ 00:13:20.384 12:23:43 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:20.384 * Looking for test storage...
00:13:20.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:20.384 12:23:43 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:20.643 12:23:43 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:13:20.643 12:23:43 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:20.643 12:23:43 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.643 12:23:43 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:20.643 12:23:43 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.643 12:23:43 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:20.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.643 --rc genhtml_branch_coverage=1 00:13:20.643 --rc genhtml_function_coverage=1 00:13:20.643 --rc genhtml_legend=1 00:13:20.643 --rc geninfo_all_blocks=1 00:13:20.643 --rc geninfo_unexecuted_blocks=1 00:13:20.643 00:13:20.643 ' 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:20.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.644 --rc genhtml_branch_coverage=1 00:13:20.644 --rc genhtml_function_coverage=1 00:13:20.644 --rc genhtml_legend=1 00:13:20.644 --rc geninfo_all_blocks=1 00:13:20.644 --rc geninfo_unexecuted_blocks=1 00:13:20.644 00:13:20.644 ' 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:13:20.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.644 --rc genhtml_branch_coverage=1 00:13:20.644 --rc genhtml_function_coverage=1 00:13:20.644 --rc genhtml_legend=1 00:13:20.644 --rc geninfo_all_blocks=1 00:13:20.644 --rc geninfo_unexecuted_blocks=1 00:13:20.644 00:13:20.644 ' 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:20.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.644 --rc genhtml_branch_coverage=1 00:13:20.644 --rc genhtml_function_coverage=1 00:13:20.644 --rc genhtml_legend=1 00:13:20.644 --rc geninfo_all_blocks=1 00:13:20.644 --rc geninfo_unexecuted_blocks=1 00:13:20.644 00:13:20.644 ' 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67477 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:20.644 12:23:43 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67477 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67477 ']' 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.644 12:23:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.903 [2024-10-07 12:23:44.012913] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:13:20.903 [2024-10-07 12:23:44.013064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67477 ] 00:13:20.903 [2024-10-07 12:23:44.190663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:21.471 [2024-10-07 12:23:44.465328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.471 [2024-10-07 12:23:44.465371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.408 12:23:45 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.408 12:23:45 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:22.408 12:23:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:22.667 Nvme0n1 00:13:22.667 12:23:45 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:22.668 12:23:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:22.668 request: 00:13:22.668 { 00:13:22.668 "bdev_name": "Nvme0n1", 00:13:22.668 "filename": "non_existing_file", 00:13:22.668 "method": "bdev_nvme_apply_firmware", 00:13:22.668 "req_id": 1 00:13:22.668 } 00:13:22.668 Got JSON-RPC error response 00:13:22.668 response: 00:13:22.668 { 00:13:22.668 "code": -32603, 00:13:22.668 "message": "open file failed." 00:13:22.668 } 00:13:22.668 12:23:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:22.668 12:23:45 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:22.668 12:23:45 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:22.926 12:23:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:22.926 12:23:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67477 00:13:22.926 12:23:46 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67477 ']' 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67477 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67477 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:22.927 killing process with pid 67477 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67477' 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67477 00:13:22.927 12:23:46 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67477 00:13:26.216 00:13:26.216 real 0m5.419s 00:13:26.216 user 0m9.370s 00:13:26.216 sys 0m0.999s 00:13:26.216 12:23:48 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.216 12:23:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.216 ************************************ 00:13:26.216 END TEST nvme_rpc 00:13:26.216 ************************************ 00:13:26.216 12:23:49 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:26.216 12:23:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:13:26.216 12:23:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.216 12:23:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.216 ************************************ 00:13:26.216 START TEST nvme_rpc_timeouts 00:13:26.216 ************************************ 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:26.216 * Looking for test storage... 00:13:26.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.216 12:23:49 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:26.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.216 --rc genhtml_branch_coverage=1 00:13:26.216 --rc genhtml_function_coverage=1 00:13:26.216 --rc genhtml_legend=1 00:13:26.216 --rc geninfo_all_blocks=1 00:13:26.216 --rc geninfo_unexecuted_blocks=1 00:13:26.216 00:13:26.216 ' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:26.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.216 --rc genhtml_branch_coverage=1 00:13:26.216 --rc genhtml_function_coverage=1 00:13:26.216 --rc genhtml_legend=1 00:13:26.216 --rc geninfo_all_blocks=1 00:13:26.216 --rc geninfo_unexecuted_blocks=1 00:13:26.216 00:13:26.216 ' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:26.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.216 --rc genhtml_branch_coverage=1 00:13:26.216 --rc genhtml_function_coverage=1 00:13:26.216 --rc genhtml_legend=1 00:13:26.216 --rc geninfo_all_blocks=1 00:13:26.216 --rc geninfo_unexecuted_blocks=1 00:13:26.216 00:13:26.216 ' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:26.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.216 --rc genhtml_branch_coverage=1 00:13:26.216 --rc genhtml_function_coverage=1 00:13:26.216 --rc genhtml_legend=1 00:13:26.216 --rc geninfo_all_blocks=1 00:13:26.216 --rc geninfo_unexecuted_blocks=1 00:13:26.216 00:13:26.216 ' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67564 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67564 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67596 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:26.216 12:23:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67596 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67596 ']' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.216 12:23:49 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:26.216 [2024-10-07 12:23:49.369839] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:26.216 [2024-10-07 12:23:49.369981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67596 ] 00:13:26.475 [2024-10-07 12:23:49.557926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:26.733 [2024-10-07 12:23:49.835260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.733 [2024-10-07 12:23:49.835300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.670 Checking default timeout settings: 00:13:27.670 12:23:50 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.670 12:23:50 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:13:27.670 12:23:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:27.670 12:23:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:27.927 Making settings changes with rpc: 00:13:27.927 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:27.927 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:28.184 Check default vs. modified settings: 00:13:28.184 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:28.184 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:28.753 Setting action_on_timeout is changed as expected. 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:28.753 Setting timeout_us is changed as expected. 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:28.753 Setting timeout_admin_us is changed as expected. 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67564 /tmp/settings_modified_67564 00:13:28.753 12:23:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67596 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67596 ']' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67596 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67596 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67596' 00:13:28.753 killing process with pid 67596 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67596 00:13:28.753 12:23:51 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67596 00:13:32.043 RPC TIMEOUT SETTING TEST PASSED. 00:13:32.043 12:23:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
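(Condensed, the default-vs-modified check traced above amounts to the loop below. The temp-file names and the three settings are taken from this run; the loop itself is a paraphrase of test/nvme/nvme_rpc_timeouts.sh, not a verbatim excerpt.)

    # Paraphrased settings comparison (paths and setting names from this run).
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67564  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67564 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && exit 1              # unchanged setting -> fail (assumed handling)
        echo "Setting $setting is changed as expected."  # e.g. none -> abort, 0 -> 12000000
    done
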
00:13:32.043 00:13:32.043 real 0m5.721s 00:13:32.043 user 0m10.251s 00:13:32.044 sys 0m1.021s 00:13:32.044 12:23:54 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.044 12:23:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:32.044 ************************************ 00:13:32.044 END TEST nvme_rpc_timeouts 00:13:32.044 ************************************ 00:13:32.044 12:23:54 -- spdk/autotest.sh@239 -- # uname -s 00:13:32.044 12:23:54 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:32.044 12:23:54 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:32.044 12:23:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:32.044 12:23:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.044 12:23:54 -- common/autotest_common.sh@10 -- # set +x 00:13:32.044 ************************************ 00:13:32.044 START TEST sw_hotplug 00:13:32.044 ************************************ 00:13:32.044 12:23:54 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:32.044 * Looking for test storage... 00:13:32.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:32.044 12:23:54 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:32.044 12:23:54 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:13:32.044 12:23:54 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:32.044 12:23:55 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.044 12:23:55 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:32.044 12:23:55 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.044 12:23:55 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:32.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.044 --rc genhtml_branch_coverage=1 00:13:32.044 --rc genhtml_function_coverage=1 00:13:32.044 --rc genhtml_legend=1 00:13:32.044 --rc geninfo_all_blocks=1 00:13:32.044 --rc geninfo_unexecuted_blocks=1 00:13:32.044 00:13:32.044 ' 00:13:32.044 12:23:55 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:32.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.044 --rc genhtml_branch_coverage=1 00:13:32.044 --rc genhtml_function_coverage=1 00:13:32.044 --rc genhtml_legend=1 00:13:32.044 --rc geninfo_all_blocks=1 00:13:32.044 --rc geninfo_unexecuted_blocks=1 00:13:32.044 00:13:32.044 ' 00:13:32.044 12:23:55 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:32.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.044 --rc genhtml_branch_coverage=1 00:13:32.044 --rc genhtml_function_coverage=1 00:13:32.044 --rc genhtml_legend=1 00:13:32.044 --rc geninfo_all_blocks=1 00:13:32.044 --rc geninfo_unexecuted_blocks=1 00:13:32.044 00:13:32.044 ' 00:13:32.044 12:23:55 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:32.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.044 --rc genhtml_branch_coverage=1 00:13:32.044 --rc genhtml_function_coverage=1 00:13:32.044 --rc genhtml_legend=1 00:13:32.044 --rc geninfo_all_blocks=1 00:13:32.044 --rc geninfo_unexecuted_blocks=1 00:13:32.044 00:13:32.044 ' 00:13:32.044 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:32.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:32.612 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:32.612 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:32.612 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:32.612 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:32.871 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:32.871 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:32.871 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
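(The nvme_in_userspace trace that follows walks the PCI scan in scripts/common.sh: build the class string from 01 (mass storage), 08 (NVM), and 02 (NVMe), then match it against lspci output. Its core reduces to the one-liner below, a condensation of the traced pipeline rather than a verbatim excerpt.)

    # List PCI functions whose class/subclass is 0108 and progif is 02,
    # i.e. NVMe controllers visible to the host (condensed from the trace below).
    lspci -mm -n -D | grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # On this box it yields: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
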
00:13:32.871 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:32.871 12:23:55 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:32.872 12:23:55 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:32.872 12:23:55 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:32.872 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:32.872 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:32.872 12:23:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:33.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:33.698 Waiting for block devices as requested 00:13:33.698 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.698 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.959 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.959 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.228 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:39.228 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:39.228 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:39.796 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:39.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:39.796 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:40.054 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:40.622 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:40.622 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:40.622 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:40.622 12:24:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68489 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:40.881 12:24:03 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:40.881 12:24:03 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:40.881 12:24:03 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:40.881 12:24:03 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:40.881 12:24:03 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:40.881 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:40.882 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:40.882 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:40.882 12:24:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:40.882 Initializing NVMe Controllers 00:13:41.141 Attaching to 0000:00:10.0 00:13:41.141 Attaching to 0000:00:11.0 00:13:41.141 Attached to 0000:00:11.0 00:13:41.141 Attached to 0000:00:10.0 00:13:41.141 Initialization complete. Starting I/O... 
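The BDF list used throughout this run (0000:00:10.0 through 0000:00:13.0) came from the lspci pipeline traced earlier: lspci -mm -n -D output is filtered for PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). Extracted as a standalone sketch, with the field handling copied from the trace:

    # Sketch: enumerate NVMe controller BDFs the way iter_pci_class_code does.
    # lspci -mm -n -D prints: BDF "class" "vendor" "device" [-pNN] ...;
    # -p02 marks prog-if 02, and the quoted class field "0108" is matched by awk.
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'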
00:13:41.141 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:41.141 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:41.141 00:13:42.079 QEMU NVMe Ctrl (12341 ): 1576 I/Os completed (+1576) 00:13:42.079 QEMU NVMe Ctrl (12340 ): 1576 I/Os completed (+1576) 00:13:42.079 00:13:43.016 QEMU NVMe Ctrl (12341 ): 3728 I/Os completed (+2152) 00:13:43.016 QEMU NVMe Ctrl (12340 ): 3728 I/Os completed (+2152) 00:13:43.016 00:13:43.953 QEMU NVMe Ctrl (12341 ): 5952 I/Os completed (+2224) 00:13:43.953 QEMU NVMe Ctrl (12340 ): 5952 I/Os completed (+2224) 00:13:43.953 00:13:44.890 QEMU NVMe Ctrl (12341 ): 8160 I/Os completed (+2208) 00:13:44.890 QEMU NVMe Ctrl (12340 ): 8160 I/Os completed (+2208) 00:13:44.890 00:13:46.274 QEMU NVMe Ctrl (12341 ): 10352 I/Os completed (+2192) 00:13:46.274 QEMU NVMe Ctrl (12340 ): 10352 I/Os completed (+2192) 00:13:46.274 00:13:46.844 12:24:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:46.844 12:24:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:46.844 12:24:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:46.844 [2024-10-07 12:24:09.956952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:46.844 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:46.844 [2024-10-07 12:24:09.958789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.958853] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.958874] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.958896] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:46.844 [2024-10-07 12:24:09.961567] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.961622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.961641] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.961660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 12:24:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:46.844 12:24:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:46.844 [2024-10-07 12:24:09.998038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:46.844 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:46.844 [2024-10-07 12:24:09.999616] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.999664] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.999692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:09.999715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:46.844 [2024-10-07 12:24:10.002272] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:10.002318] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:10.002339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 [2024-10-07 12:24:10.002372] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.844 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:46.844 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:46.844 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:46.844 EAL: Scan for (pci) bus failed. 00:13:46.844 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.844 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.844 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:47.103 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:47.103 Attaching to 0000:00:10.0 00:13:47.103 Attached to 0000:00:10.0 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:47.103 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:47.103 Attaching to 0000:00:11.0 00:13:47.103 Attached to 0000:00:11.0 00:13:48.042 QEMU NVMe Ctrl (12340 ): 2092 I/Os completed (+2092) 00:13:48.042 QEMU NVMe Ctrl (12341 ): 1868 I/Os completed (+1868) 00:13:48.042 00:13:48.988 QEMU NVMe Ctrl (12340 ): 4321 I/Os completed (+2229) 00:13:48.988 QEMU NVMe Ctrl (12341 ): 4096 I/Os completed (+2228) 00:13:48.988 00:13:49.926 QEMU NVMe Ctrl (12340 ): 6541 I/Os completed (+2220) 00:13:49.927 QEMU NVMe Ctrl (12341 ): 6316 I/Os completed (+2220) 00:13:49.927 00:13:51.307 QEMU NVMe Ctrl (12340 ): 8761 I/Os completed (+2220) 00:13:51.307 QEMU NVMe Ctrl (12341 ): 8536 I/Os completed (+2220) 00:13:51.307 00:13:51.875 QEMU NVMe Ctrl (12340 ): 10981 I/Os completed (+2220) 00:13:51.875 QEMU NVMe Ctrl (12341 ): 10756 I/Os completed (+2220) 00:13:51.875 00:13:53.251 QEMU NVMe Ctrl (12340 ): 13201 I/Os completed (+2220) 00:13:53.251 QEMU NVMe Ctrl (12341 ): 12976 I/Os completed (+2220) 00:13:53.251 00:13:54.188 QEMU NVMe Ctrl (12340 ): 15413 I/Os completed (+2212) 00:13:54.188 QEMU NVMe Ctrl (12341 ): 15188 I/Os completed (+2212) 
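One full hot-plug cycle has now completed: echo 1 to each device's remove node (which aborts the in-flight I/O seen above), then a bus rescan and a rebind to uio_pci_generic before the example app re-attaches. A sketch of those mechanics for a single controller, assuming the standard kernel sysfs interfaces; the xtrace only shows the echoed values, not the write targets, so the paths here are an assumption:

    bdf=0000:00:10.0                                        # example BDF from this run
    # Surprise-remove: outstanding commands fail with "aborting outstanding command".
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    # Bring the device back and steer it to the userspace driver.
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override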
00:13:54.188 00:13:55.125 QEMU NVMe Ctrl (12340 ): 17641 I/Os completed (+2228) 00:13:55.125 QEMU NVMe Ctrl (12341 ): 17416 I/Os completed (+2228) 00:13:55.125 00:13:56.060 QEMU NVMe Ctrl (12340 ): 19869 I/Os completed (+2228) 00:13:56.060 QEMU NVMe Ctrl (12341 ): 19644 I/Os completed (+2228) 00:13:56.060 00:13:56.996 QEMU NVMe Ctrl (12340 ): 22101 I/Os completed (+2232) 00:13:56.996 QEMU NVMe Ctrl (12341 ): 21876 I/Os completed (+2232) 00:13:56.996 00:13:57.948 QEMU NVMe Ctrl (12340 ): 24329 I/Os completed (+2228) 00:13:57.948 QEMU NVMe Ctrl (12341 ): 24104 I/Os completed (+2228) 00:13:57.948 00:13:58.884 QEMU NVMe Ctrl (12340 ): 26569 I/Os completed (+2240) 00:13:58.884 QEMU NVMe Ctrl (12341 ): 26344 I/Os completed (+2240) 00:13:58.884 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:59.144 [2024-10-07 12:24:22.321165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:59.144 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:59.144 [2024-10-07 12:24:22.322837] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.322892] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.322935] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.322957] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:59.144 [2024-10-07 12:24:22.325969] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.326022] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.326040] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.326059] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:59.144 [2024-10-07 12:24:22.360045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:59.144 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:59.144 [2024-10-07 12:24:22.361569] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.361619] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.361645] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.361665] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:59.144 [2024-10-07 12:24:22.364161] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.364205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.364226] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 [2024-10-07 12:24:22.364246] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:59.144 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:59.403 Attaching to 0000:00:10.0 00:13:59.403 Attached to 0000:00:10.0 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.403 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:59.403 Attaching to 0000:00:11.0 00:13:59.403 Attached to 0000:00:11.0 00:13:59.971 QEMU NVMe Ctrl (12340 ): 1272 I/Os completed (+1272) 00:13:59.971 QEMU NVMe Ctrl (12341 ): 1040 I/Os completed (+1040) 00:13:59.971 00:14:00.906 QEMU NVMe Ctrl (12340 ): 3488 I/Os completed (+2216) 00:14:00.906 QEMU NVMe Ctrl (12341 ): 3256 I/Os completed (+2216) 00:14:00.906 00:14:02.282 QEMU NVMe Ctrl (12340 ): 5692 I/Os completed (+2204) 00:14:02.282 QEMU NVMe Ctrl (12341 ): 5460 I/Os completed (+2204) 00:14:02.282 00:14:02.878 QEMU NVMe Ctrl (12340 ): 7904 I/Os completed (+2212) 00:14:02.878 QEMU NVMe Ctrl (12341 ): 7672 I/Os completed (+2212) 00:14:02.878 00:14:04.256 QEMU NVMe Ctrl (12340 ): 10092 I/Os completed (+2188) 00:14:04.256 QEMU NVMe Ctrl (12341 ): 9860 I/Os completed (+2188) 00:14:04.256 00:14:05.218 QEMU NVMe Ctrl (12340 ): 12268 I/Os completed (+2176) 00:14:05.218 QEMU NVMe Ctrl (12341 ): 12038 I/Os completed (+2178) 00:14:05.218 00:14:06.157 QEMU NVMe Ctrl (12340 ): 14472 I/Os completed (+2204) 00:14:06.157 QEMU NVMe Ctrl (12341 ): 14242 I/Os completed (+2204) 00:14:06.157 00:14:07.093 QEMU NVMe Ctrl (12340 ): 16664 I/Os completed (+2192) 00:14:07.093 QEMU NVMe Ctrl (12341 ): 16434 I/Os completed (+2192) 00:14:07.093 00:14:08.032 
QEMU NVMe Ctrl (12340 ): 18880 I/Os completed (+2216) 00:14:08.032 QEMU NVMe Ctrl (12341 ): 18650 I/Os completed (+2216) 00:14:08.032 00:14:08.969 QEMU NVMe Ctrl (12340 ): 21092 I/Os completed (+2212) 00:14:08.969 QEMU NVMe Ctrl (12341 ): 20862 I/Os completed (+2212) 00:14:08.969 00:14:09.906 QEMU NVMe Ctrl (12340 ): 23280 I/Os completed (+2188) 00:14:09.906 QEMU NVMe Ctrl (12341 ): 23050 I/Os completed (+2188) 00:14:09.906 00:14:11.286 QEMU NVMe Ctrl (12340 ): 25472 I/Os completed (+2192) 00:14:11.286 QEMU NVMe Ctrl (12341 ): 25242 I/Os completed (+2192) 00:14:11.286 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:11.545 [2024-10-07 12:24:34.682985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:11.545 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:11.545 [2024-10-07 12:24:34.684658] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.684711] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.684735] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.684759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:11.545 [2024-10-07 12:24:34.687484] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.687538] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.687557] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.687576] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:11.545 [2024-10-07 12:24:34.721817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
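The whole three-event run executes under bash's time builtin with TIMEFORMAT=%2R, which is how the "remove_attach_helper took 43.08s" line printed shortly below gets its number. The idiom, sketched with a stand-in workload in place of the helper:

    TIMEFORMAT=%2R                                          # real time only, two decimals
    # time writes to stderr, so capture it via the { ...; } 2>&1 grouping.
    helper_time=$( { time sleep 1.2 > /dev/null; } 2>&1 )   # sleep stands in for the helper
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2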
00:14:11.545 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:11.545 [2024-10-07 12:24:34.723365] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.723431] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.723456] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.723475] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:11.545 [2024-10-07 12:24:34.725968] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.726013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.726036] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 [2024-10-07 12:24:34.726056] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:11.545 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:11.545 EAL: Scan for (pci) bus failed. 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:11.545 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:11.804 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:11.804 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:11.804 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:11.804 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:11.804 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:11.804 Attaching to 0000:00:10.0 00:14:11.804 Attached to 0000:00:10.0 00:14:11.804 12:24:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:11.804 12:24:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:11.804 12:24:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:11.804 Attaching to 0000:00:11.0 00:14:11.804 Attached to 0000:00:11.0 00:14:11.804 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:11.804 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:11.804 [2024-10-07 12:24:35.040015] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:24.011 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:24.011 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:24.011 12:24:47 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.08 00:14:24.011 12:24:47 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.08 00:14:24.011 12:24:47 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:14:24.011 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.08 00:14:24.011 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.08 2 00:14:24.011 remove_attach_helper took 43.08s to complete (handling 2 nvme drive(s)) 12:24:47 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68489 00:14:30.576 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68489) - No such process 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68489 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69037 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:30.576 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69037 00:14:30.576 12:24:53 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 69037 ']' 00:14:30.576 12:24:53 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.576 12:24:53 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.576 12:24:53 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.576 12:24:53 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.576 12:24:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.576 [2024-10-07 12:24:53.147826] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
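From here the test repeats the same hot-plug events against a running SPDK target, so detach and attach are observed through bdev_nvme's hotplug poller instead of the example app. The startup traced above, condensed into a sketch; the readiness probe is illustrative, where the script's waitforlisten retries against /var/tmp/spdk.sock:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Block until the RPC socket answers before issuing real commands.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
    "$rpc" bdev_nvme_set_hotplug -e    # enable the hotplug monitor, as rpc_cmd does above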
00:14:30.576 [2024-10-07 12:24:53.147968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69037 ] 00:14:30.576 [2024-10-07 12:24:53.316691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.576 [2024-10-07 12:24:53.514023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:14:31.144 12:24:54 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:31.144 12:24:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:37.711 [2024-10-07 12:25:00.445324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:37.711 [2024-10-07 12:25:00.447975] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.448037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:37.711 [2024-10-07 12:25:00.448061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 [2024-10-07 12:25:00.448129] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.448144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.448159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 [2024-10-07 12:25:00.448172] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.448188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.448200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 [2024-10-07 12:25:00.448218] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.448229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.448244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:37.711 12:25:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.711 12:25:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.711 12:25:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:37.711 [2024-10-07 12:25:00.844628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
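In this phase device presence is read back through the RPC layer rather than sysfs: the bdev_bdfs helper traced above pipes bdev_get_bdevs through jq to recover the PCI address behind every NVMe bdev. Standalone, with rpc.py standing in for the script's rpc_cmd wrapper:

    # One BDF per controller still exposed by the target; sort -u collapses
    # multiple namespaces/bdevs of the same controller onto a single line.
    rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u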
00:14:37.711 [2024-10-07 12:25:00.847132] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.847176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.847211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 [2024-10-07 12:25:00.847233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.847247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.847259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 [2024-10-07 12:25:00.847274] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.847285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.847299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 [2024-10-07 12:25:00.847311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.711 [2024-10-07 12:25:00.847324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.711 [2024-10-07 12:25:00.847336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:37.711 12:25:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:37.711 12:25:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.711 12:25:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.970 12:25:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.971 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:37.971 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:37.971 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:37.971 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:37.971 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:38.229 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:50.440 12:25:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.440 12:25:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:50.440 12:25:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:50.440 12:25:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.440 12:25:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:50.440 [2024-10-07 12:25:13.524258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
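After each reattach the script asserts both controllers are visible again: the odd-looking [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]] test above is just xtrace escaping a literal right-hand side character by character. The check, sketched plainly:

    expected='0000:00:10.0 0000:00:11.0'
    bdfs=($(bdev_bdfs))                    # helper from the trace
    [[ ${bdfs[*]} == "$expected" ]] || echo "not all controllers reattached" >&2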
00:14:50.440 [2024-10-07 12:25:13.526547] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.440 [2024-10-07 12:25:13.526595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.440 [2024-10-07 12:25:13.526611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.440 [2024-10-07 12:25:13.526638] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.440 [2024-10-07 12:25:13.526650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.440 [2024-10-07 12:25:13.526664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.440 [2024-10-07 12:25:13.526678] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.440 [2024-10-07 12:25:13.526691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.440 [2024-10-07 12:25:13.526703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.440 [2024-10-07 12:25:13.526717] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.440 [2024-10-07 12:25:13.526728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.440 [2024-10-07 12:25:13.526742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.440 12:25:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:50.440 12:25:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:51.008 [2024-10-07 12:25:14.023442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:51.008 [2024-10-07 12:25:14.025615] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.008 [2024-10-07 12:25:14.025655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.008 [2024-10-07 12:25:14.025677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.008 [2024-10-07 12:25:14.025713] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.008 [2024-10-07 12:25:14.025728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.008 [2024-10-07 12:25:14.025740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.008 [2024-10-07 12:25:14.025755] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.008 [2024-10-07 12:25:14.025766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.008 [2024-10-07 12:25:14.025780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.008 [2024-10-07 12:25:14.025793] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.008 [2024-10-07 12:25:14.025807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.008 [2024-10-07 12:25:14.025819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.008 12:25:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.008 12:25:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:51.008 12:25:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.008 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.267 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.477 12:25:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.477 12:25:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.477 12:25:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.477 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.477 12:25:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.477 12:25:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.477 [2024-10-07 12:25:26.603204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
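Each detach above is followed by a poll: bdev_bdfs is re-run every half second, printing "Still waiting for %s to be gone" while any surprise-removed BDF is still reported by the target. Condensed into a sketch using the trace's helper name:

    # Poll until the target stops reporting any of the removed BDFs.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done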
00:15:03.477 [2024-10-07 12:25:26.605503] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.477 [2024-10-07 12:25:26.605546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.477 [2024-10-07 12:25:26.605562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.477 [2024-10-07 12:25:26.605588] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.477 [2024-10-07 12:25:26.605600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.477 [2024-10-07 12:25:26.605618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.477 [2024-10-07 12:25:26.605631] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.477 [2024-10-07 12:25:26.605644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.477 [2024-10-07 12:25:26.605656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.477 [2024-10-07 12:25:26.605671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.477 [2024-10-07 12:25:26.605682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.477 [2024-10-07 12:25:26.605696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.478 12:25:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.478 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:03.478 12:25:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:03.737 [2024-10-07 12:25:27.002565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:03.737 [2024-10-07 12:25:27.005001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.737 [2024-10-07 12:25:27.005050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.737 [2024-10-07 12:25:27.005085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.737 [2024-10-07 12:25:27.005109] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.737 [2024-10-07 12:25:27.005123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.737 [2024-10-07 12:25:27.005134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.737 [2024-10-07 12:25:27.005150] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.737 [2024-10-07 12:25:27.005161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.737 [2024-10-07 12:25:27.005177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.737 [2024-10-07 12:25:27.005190] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.737 [2024-10-07 12:25:27.005203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.737 [2024-10-07 12:25:27.005214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.996 12:25:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.996 12:25:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.996 12:25:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:03.996 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.311 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:15:16.531 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.531 12:25:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.531 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:16.532 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:16.532 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:16.532 12:25:39 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:15:16.532 12:25:39 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:15:16.532 12:25:39 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:15:16.532 12:25:39 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:15:16.532 12:25:39 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:15:16.532 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:16.532 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:16.532 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:16.532 12:25:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:16.532 12:25:39 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.120 12:25:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.120 12:25:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.120 [2024-10-07 12:25:45.670503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:23.120 [2024-10-07 12:25:45.672692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.120 [2024-10-07 12:25:45.672735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.120 [2024-10-07 12:25:45.672752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.120 [2024-10-07 12:25:45.672778] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.120 [2024-10-07 12:25:45.672791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.120 [2024-10-07 12:25:45.672805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.120 [2024-10-07 12:25:45.672817] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.120 [2024-10-07 12:25:45.672831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.120 [2024-10-07 12:25:45.672842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.120 [2024-10-07 12:25:45.672857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.120 [2024-10-07 12:25:45.672868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.120 [2024-10-07 12:25:45.672885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.120 [2024-10-07 12:25:45.672932] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:15:23.120 [2024-10-07 12:25:45.672958] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:15:23.120 [2024-10-07 12:25:45.672969] bdev_nvme.c:5390:aer_cb: *WARNING*: AER request execute failed 00:15:23.120 [2024-10-07 12:25:45.672982] bdev_nvme.c:5390:aer_cb: 
*WARNING*: AER request execute failed 00:15:23.120 12:25:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:23.120 12:25:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:23.120 [2024-10-07 12:25:46.069857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:15:23.121 [2024-10-07 12:25:46.072098] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.121 [2024-10-07 12:25:46.072146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.121 [2024-10-07 12:25:46.072176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.121 [2024-10-07 12:25:46.072215] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.121 [2024-10-07 12:25:46.072229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.121 [2024-10-07 12:25:46.072241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.121 [2024-10-07 12:25:46.072257] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.121 [2024-10-07 12:25:46.072267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.121 [2024-10-07 12:25:46.072282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.121 [2024-10-07 12:25:46.072296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.121 [2024-10-07 12:25:46.072312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.121 [2024-10-07 12:25:46.072324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.121 12:25:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.121 12:25:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.121 12:25:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.121 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 
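The polling helper the trace keeps re-entering (sw_hotplug.sh@12-13 for the listing, @50-51 for the wait loop) can be read directly off the xtrace output above. A minimal sketch, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the running SPDK target:

    # PCI addresses backing the NVMe bdevs the target currently exposes
    # (sw_hotplug.sh@12-13 in the trace).
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    # sw_hotplug.sh@50-51: poll until the detached controllers drop out
    # of the bdev list, printing the stragglers each round.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done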
00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.379 12:25:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:35.589 12:25:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.589 12:25:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:35.589 12:25:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:35.589 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:35.589 [2024-10-07 12:25:58.649623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:15:35.589 [2024-10-07 12:25:58.651514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.589 [2024-10-07 12:25:58.651567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.589 [2024-10-07 12:25:58.651584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.589 [2024-10-07 12:25:58.651609] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.589 [2024-10-07 12:25:58.651621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.589 [2024-10-07 12:25:58.651636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.589 [2024-10-07 12:25:58.651649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.589 [2024-10-07 12:25:58.651663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.589 [2024-10-07 12:25:58.651674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.589 [2024-10-07 12:25:58.651689] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.589 [2024-10-07 12:25:58.651700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.589 [2024-10-07 12:25:58.651715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:35.590 12:25:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.590 12:25:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:35.590 12:25:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:35.590 12:25:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:36.158 [2024-10-07 12:25:59.148829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:36.158 [2024-10-07 12:25:59.151160] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.158 [2024-10-07 12:25:59.151205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.158 [2024-10-07 12:25:59.151240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.158 [2024-10-07 12:25:59.151262] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.158 [2024-10-07 12:25:59.151276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.158 [2024-10-07 12:25:59.151288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.158 [2024-10-07 12:25:59.151303] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.158 [2024-10-07 12:25:59.151315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.158 [2024-10-07 12:25:59.151329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.158 [2024-10-07 12:25:59.151342] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.158 [2024-10-07 12:25:59.151356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.158 [2024-10-07 12:25:59.151367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.158 12:25:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.158 12:25:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.158 12:25:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.158 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
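The detach and re-attach steps themselves are plain sysfs writes. The xtrace output shows only the echoed values (sw_hotplug.sh@40, @56, @58-62), not the redirection targets, so the sysfs paths in this sketch are assumptions based on the standard PCI interface; the BDF is echoed twice per device (@60/@61), for which separate probe and bind writes are one plausible reading:

    nvmes=(0000:00:10.0 0000:00:11.0)

    # Detach (@40): surprise-remove each controller from the bus.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

    # Re-attach (@56, @58-62): rescan the bus, then steer each function
    # back to uio_pci_generic before clearing the override.
    echo 1 > /sys/bus/pci/rescan
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo "$dev" > "/sys/bus/pci/drivers/uio_pci_generic/bind" || true  # may already be bound
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done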
00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.417 12:25:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:48.621 12:26:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.621 12:26:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:48.621 12:26:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:48.621 [2024-10-07 12:26:11.728571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:48.621 [2024-10-07 12:26:11.730440] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.621 [2024-10-07 12:26:11.730594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.621 [2024-10-07 12:26:11.730736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.621 [2024-10-07 12:26:11.730810] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.621 [2024-10-07 12:26:11.730892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.621 [2024-10-07 12:26:11.730978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.621 [2024-10-07 12:26:11.731080] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.621 [2024-10-07 12:26:11.731170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.621 [2024-10-07 12:26:11.731270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.621 [2024-10-07 12:26:11.731293] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.621 [2024-10-07 12:26:11.731305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.621 [2024-10-07 12:26:11.731319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:48.621 12:26:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.621 12:26:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:48.621 12:26:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:48.621 12:26:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:48.880 [2024-10-07 12:26:12.127930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:15:48.880 [2024-10-07 12:26:12.130178] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.880 [2024-10-07 12:26:12.130217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.880 [2024-10-07 12:26:12.130234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.880 [2024-10-07 12:26:12.130253] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.880 [2024-10-07 12:26:12.130267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.880 [2024-10-07 12:26:12.130278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.880 [2024-10-07 12:26:12.130297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.880 [2024-10-07 12:26:12.130308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.880 [2024-10-07 12:26:12.130324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.880 [2024-10-07 12:26:12.130336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.880 [2024-10-07 12:26:12.130350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.880 [2024-10-07 12:26:12.130361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
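debug_remove_attach_helper wraps the whole loop in timing_cmd (autotest_common.sh@707-720 in the trace): TIMEFORMAT=%2R makes bash's time builtin emit the wall-clock seconds that surface as helper_time (45.19s earlier, 45.11s below). A rough sketch of that capture idiom, not the autotest implementation verbatim:

    timing_cmd() {
        local TIMEFORMAT=%2R elapsed
        exec 3>&1 4>&2
        # Route the command's own stdout/stderr to the saved fds so the
        # substitution captures only `time`'s report from the group's stderr.
        elapsed=$({ time "$@" 1>&3 2>&4; } 2>&1)
        exec 3>&- 4>&-
        echo "$elapsed"
    }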
00:15:49.140 12:26:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.140 12:26:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.140 12:26:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:49.140 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.399 12:26:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.11 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.11 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.11 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.11 2 00:16:01.611 remove_attach_helper took 45.11s to complete (handling 2 nvme drive(s)) 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:01.611 12:26:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69037 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 69037 ']' 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 69037 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69037 00:16:01.611 killing process with pid 69037 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69037' 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@969 -- # kill 69037 00:16:01.611 12:26:24 sw_hotplug -- common/autotest_common.sh@974 -- # wait 69037 00:16:04.144 12:26:27 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:04.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:05.285 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:05.285 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:05.285 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:05.285 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:05.285 00:16:05.285 real 2m33.706s 00:16:05.285 user 1m51.124s 00:16:05.285 sys 0m22.582s 00:16:05.285 12:26:28 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.285 ************************************ 00:16:05.285 END TEST sw_hotplug 00:16:05.285 ************************************ 00:16:05.285 12:26:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.545 12:26:28 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:05.545 12:26:28 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:05.545 12:26:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:05.545 12:26:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.545 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:16:05.545 ************************************ 00:16:05.545 START TEST nvme_xnvme 00:16:05.545 ************************************ 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:05.545 * Looking for test storage... 
00:16:05.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:05.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.545 --rc genhtml_branch_coverage=1 00:16:05.545 --rc genhtml_function_coverage=1 00:16:05.545 --rc genhtml_legend=1 00:16:05.545 --rc geninfo_all_blocks=1 00:16:05.545 --rc geninfo_unexecuted_blocks=1 00:16:05.545 00:16:05.545 ' 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:05.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.545 --rc genhtml_branch_coverage=1 00:16:05.545 --rc genhtml_function_coverage=1 00:16:05.545 --rc genhtml_legend=1 00:16:05.545 --rc geninfo_all_blocks=1 00:16:05.545 --rc geninfo_unexecuted_blocks=1 00:16:05.545 00:16:05.545 ' 00:16:05.545 12:26:28 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:05.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.545 --rc genhtml_branch_coverage=1 00:16:05.545 --rc genhtml_function_coverage=1 00:16:05.545 --rc genhtml_legend=1 00:16:05.545 --rc geninfo_all_blocks=1 00:16:05.545 --rc geninfo_unexecuted_blocks=1 00:16:05.545 00:16:05.545 ' 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:05.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.545 --rc genhtml_branch_coverage=1 00:16:05.545 --rc genhtml_function_coverage=1 00:16:05.545 --rc genhtml_legend=1 00:16:05.545 --rc geninfo_all_blocks=1 00:16:05.545 --rc geninfo_unexecuted_blocks=1 00:16:05.545 00:16:05.545 ' 00:16:05.545 12:26:28 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.545 12:26:28 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.545 12:26:28 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.545 12:26:28 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.545 12:26:28 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.545 12:26:28 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:05.545 12:26:28 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.545 12:26:28 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:16:05.545 12:26:28 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:05.805 12:26:28 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.805 12:26:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 
************************************ 00:16:05.805 START TEST xnvme_to_malloc_dd_copy 00:16:05.805 ************************************ 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:05.805 12:26:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 { 00:16:05.805 "subsystems": [ 00:16:05.805 { 00:16:05.805 "subsystem": "bdev", 00:16:05.805 "config": [ 00:16:05.805 { 00:16:05.805 "params": { 00:16:05.805 "block_size": 512, 00:16:05.805 "num_blocks": 2097152, 00:16:05.805 "name": "malloc0" 00:16:05.805 }, 00:16:05.805 "method": "bdev_malloc_create" 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "params": { 00:16:05.805 "io_mechanism": "libaio", 00:16:05.805 "filename": "/dev/nullb0", 00:16:05.805 "name": "null0" 00:16:05.805 }, 00:16:05.805 "method": "bdev_xnvme_create" 00:16:05.805 }, 
00:16:05.805 { 00:16:05.805 "method": "bdev_wait_for_examine" 00:16:05.805 } 00:16:05.805 ] 00:16:05.805 } 00:16:05.805 ] 00:16:05.805 } 00:16:05.805 [2024-10-07 12:26:28.972404] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:16:05.805 [2024-10-07 12:26:28.972527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70405 ] 00:16:06.064 [2024-10-07 12:26:29.144578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.064 [2024-10-07 12:26:29.348286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.592  [2024-10-07T12:26:32.818Z] Copying: 269/1024 [MB] (269 MBps) [2024-10-07T12:26:33.784Z] Copying: 540/1024 [MB] (271 MBps) [2024-10-07T12:26:34.720Z] Copying: 812/1024 [MB] (271 MBps) [2024-10-07T12:26:38.908Z] Copying: 1024/1024 [MB] (average 271 MBps) 00:16:15.617 00:16:15.617 12:26:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:15.617 12:26:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:15.617 12:26:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:15.617 12:26:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:15.617 { 00:16:15.617 "subsystems": [ 00:16:15.617 { 00:16:15.617 "subsystem": "bdev", 00:16:15.617 "config": [ 00:16:15.617 { 00:16:15.617 "params": { 00:16:15.617 "block_size": 512, 00:16:15.617 "num_blocks": 2097152, 00:16:15.617 "name": "malloc0" 00:16:15.617 }, 00:16:15.617 "method": "bdev_malloc_create" 00:16:15.617 }, 00:16:15.617 { 00:16:15.617 "params": { 00:16:15.617 "io_mechanism": "libaio", 00:16:15.617 "filename": "/dev/nullb0", 00:16:15.617 "name": "null0" 00:16:15.617 }, 00:16:15.617 "method": "bdev_xnvme_create" 00:16:15.617 }, 00:16:15.617 { 00:16:15.617 "method": "bdev_wait_for_examine" 00:16:15.617 } 00:16:15.617 ] 00:16:15.617 } 00:16:15.617 ] 00:16:15.617 } 00:16:15.617 [2024-10-07 12:26:38.540118] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
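Each copy pass drives spdk_dd with a bdev config passed as JSON on an anonymous fd (xnvme.sh@42/@47); the generated config is printed verbatim above. The equivalent standalone invocation, assuming a 1 GiB /dev/nullb0 created by `modprobe null_blk gb=1` (as init_null_blk does at the top of this test) and paths relative to an SPDK build tree — first write the config to xnvme.json:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

then run the copy:

    # malloc0 -> null0 over the libaio xnvme backend (xnvme.sh@42).
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme.json

The reverse pass (xnvme.sh@47) simply swaps the arguments: --ib=null0 --ob=malloc0.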
00:16:15.617 [2024-10-07 12:26:38.540286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70514 ] 00:16:15.617 [2024-10-07 12:26:38.708070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.617 [2024-10-07 12:26:38.900671] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.195  [2024-10-07T12:26:42.424Z] Copying: 275/1024 [MB] (275 MBps) [2024-10-07T12:26:43.362Z] Copying: 551/1024 [MB] (275 MBps) [2024-10-07T12:26:44.299Z] Copying: 823/1024 [MB] (272 MBps) [2024-10-07T12:26:48.494Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:16:25.203 00:16:25.203 12:26:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:25.203 12:26:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:25.203 12:26:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:25.203 12:26:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:25.203 12:26:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:25.203 12:26:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:25.203 { 00:16:25.203 "subsystems": [ 00:16:25.203 { 00:16:25.204 "subsystem": "bdev", 00:16:25.204 "config": [ 00:16:25.204 { 00:16:25.204 "params": { 00:16:25.204 "block_size": 512, 00:16:25.204 "num_blocks": 2097152, 00:16:25.204 "name": "malloc0" 00:16:25.204 }, 00:16:25.204 "method": "bdev_malloc_create" 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "params": { 00:16:25.204 "io_mechanism": "io_uring", 00:16:25.204 "filename": "/dev/nullb0", 00:16:25.204 "name": "null0" 00:16:25.204 }, 00:16:25.204 "method": "bdev_xnvme_create" 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "method": "bdev_wait_for_examine" 00:16:25.204 } 00:16:25.204 ] 00:16:25.204 } 00:16:25.204 ] 00:16:25.204 } 00:16:25.204 [2024-10-07 12:26:48.058209] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:16:25.204 [2024-10-07 12:26:48.058323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70618 ] 00:16:25.204 [2024-10-07 12:26:48.229686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.204 [2024-10-07 12:26:48.427923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.742  [2024-10-07T12:26:51.969Z] Copying: 283/1024 [MB] (283 MBps) [2024-10-07T12:26:52.906Z] Copying: 564/1024 [MB] (280 MBps) [2024-10-07T12:26:53.475Z] Copying: 845/1024 [MB] (281 MBps) [2024-10-07T12:26:57.666Z] Copying: 1024/1024 [MB] (average 282 MBps) 00:16:34.375 00:16:34.375 12:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:34.375 12:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:34.375 12:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:34.375 12:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:34.375 { 00:16:34.375 "subsystems": [ 00:16:34.375 { 00:16:34.375 "subsystem": "bdev", 00:16:34.375 "config": [ 00:16:34.375 { 00:16:34.375 "params": { 00:16:34.375 "block_size": 512, 00:16:34.375 "num_blocks": 2097152, 00:16:34.375 "name": "malloc0" 00:16:34.375 }, 00:16:34.375 "method": "bdev_malloc_create" 00:16:34.375 }, 00:16:34.375 { 00:16:34.375 "params": { 00:16:34.375 "io_mechanism": "io_uring", 00:16:34.375 "filename": "/dev/nullb0", 00:16:34.375 "name": "null0" 00:16:34.375 }, 00:16:34.375 "method": "bdev_xnvme_create" 00:16:34.375 }, 00:16:34.375 { 00:16:34.375 "method": "bdev_wait_for_examine" 00:16:34.375 } 00:16:34.375 ] 00:16:34.375 } 00:16:34.375 ] 00:16:34.375 } 00:16:34.375 [2024-10-07 12:26:57.497562] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
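From xnvme.sh@38-39 the same copy pair is repeated with the io_uring backend; the only change in the generated config is the io_mechanism parameter of the bdev_xnvme_create call:

    { "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
      "method": "bdev_xnvme_create" }

The throughput figures in the progress output above — roughly 271-274 MB/s average for libaio versus 281-282 MB/s for io_uring against the null backend — come straight from spdk_dd.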
00:16:34.375 [2024-10-07 12:26:57.497691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70727 ] 00:16:34.633 [2024-10-07 12:26:57.670751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.633 [2024-10-07 12:26:57.867005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.167  [2024-10-07T12:27:01.395Z] Copying: 285/1024 [MB] (285 MBps) [2024-10-07T12:27:02.332Z] Copying: 564/1024 [MB] (279 MBps) [2024-10-07T12:27:02.900Z] Copying: 844/1024 [MB] (279 MBps) [2024-10-07T12:27:07.186Z] Copying: 1024/1024 [MB] (average 281 MBps) 00:16:43.895 00:16:43.895 12:27:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:43.895 12:27:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:43.895 00:16:43.895 real 0m38.024s 00:16:43.895 user 0m33.236s 00:16:43.895 sys 0m4.252s 00:16:43.895 12:27:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.895 12:27:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:43.895 ************************************ 00:16:43.895 END TEST xnvme_to_malloc_dd_copy 00:16:43.895 ************************************ 00:16:43.895 12:27:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:43.895 12:27:06 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:43.895 12:27:06 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.895 12:27:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.895 ************************************ 00:16:43.895 START TEST xnvme_bdevperf 00:16:43.895 ************************************ 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:43.895 
12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:43.895 12:27:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:43.895 { 00:16:43.895 "subsystems": [ 00:16:43.895 { 00:16:43.895 "subsystem": "bdev", 00:16:43.895 "config": [ 00:16:43.895 { 00:16:43.895 "params": { 00:16:43.895 "io_mechanism": "libaio", 00:16:43.895 "filename": "/dev/nullb0", 00:16:43.895 "name": "null0" 00:16:43.895 }, 00:16:43.895 "method": "bdev_xnvme_create" 00:16:43.895 }, 00:16:43.895 { 00:16:43.895 "method": "bdev_wait_for_examine" 00:16:43.895 } 00:16:43.895 ] 00:16:43.895 } 00:16:43.895 ] 00:16:43.895 } 00:16:43.895 [2024-10-07 12:27:07.070408] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:16:43.895 [2024-10-07 12:27:07.070533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70859 ] 00:16:44.173 [2024-10-07 12:27:07.241155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.173 [2024-10-07 12:27:07.436706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.741 Running I/O for 5 seconds... 00:16:46.624 160448.00 IOPS, 626.75 MiB/s [2024-10-07T12:27:10.852Z] 160992.00 IOPS, 628.88 MiB/s [2024-10-07T12:27:11.787Z] 161322.67 IOPS, 630.17 MiB/s [2024-10-07T12:27:13.162Z] 161984.00 IOPS, 632.75 MiB/s 00:16:49.871 Latency(us) 00:16:49.871 [2024-10-07T12:27:13.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.871 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:49.871 null0 : 5.00 161981.42 632.74 0.00 0.00 392.80 127.49 1658.14 00:16:49.871 [2024-10-07T12:27:13.162Z] =================================================================================================================== 00:16:49.871 [2024-10-07T12:27:13.163Z] Total : 161981.42 632.74 0.00 0.00 392.80 127.49 1658.14 00:16:50.808 12:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:50.808 12:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:50.808 12:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:50.808 12:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:50.808 12:27:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:50.808 12:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.808 { 00:16:50.808 "subsystems": [ 00:16:50.808 { 00:16:50.808 "subsystem": "bdev", 00:16:50.808 "config": [ 00:16:50.808 { 00:16:50.808 "params": { 00:16:50.808 "io_mechanism": "io_uring", 00:16:50.808 "filename": "/dev/nullb0", 00:16:50.808 "name": "null0" 00:16:50.808 }, 00:16:50.808 "method": "bdev_xnvme_create" 00:16:50.808 }, 00:16:50.808 { 00:16:50.808 "method": 
"bdev_wait_for_examine" 00:16:50.808 } 00:16:50.808 ] 00:16:50.808 } 00:16:50.808 ] 00:16:50.808 } 00:16:51.067 [2024-10-07 12:27:14.124746] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:16:51.067 [2024-10-07 12:27:14.124912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70940 ] 00:16:51.067 [2024-10-07 12:27:14.289562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.325 [2024-10-07 12:27:14.492182] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.586 Running I/O for 5 seconds... 00:16:53.540 208704.00 IOPS, 815.25 MiB/s [2024-10-07T12:27:18.209Z] 209280.00 IOPS, 817.50 MiB/s [2024-10-07T12:27:19.145Z] 209301.33 IOPS, 817.58 MiB/s [2024-10-07T12:27:20.081Z] 208864.00 IOPS, 815.88 MiB/s 00:16:56.790 Latency(us) 00:16:56.790 [2024-10-07T12:27:20.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.790 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:56.790 null0 : 5.00 208678.67 815.15 0.00 0.00 304.47 189.17 1710.78 00:16:56.790 [2024-10-07T12:27:20.081Z] =================================================================================================================== 00:16:56.790 [2024-10-07T12:27:20.081Z] Total : 208678.67 815.15 0.00 0.00 304.47 189.17 1710.78 00:16:58.165 12:27:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:16:58.165 12:27:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:58.165 00:16:58.165 real 0m14.157s 00:16:58.165 user 0m10.715s 00:16:58.165 sys 0m3.235s 00:16:58.165 12:27:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.165 12:27:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:58.165 ************************************ 00:16:58.165 END TEST xnvme_bdevperf 00:16:58.165 ************************************ 00:16:58.165 00:16:58.165 real 0m52.552s 00:16:58.165 user 0m44.121s 00:16:58.165 sys 0m7.701s 00:16:58.165 12:27:21 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.165 12:27:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.165 ************************************ 00:16:58.165 END TEST nvme_xnvme 00:16:58.165 ************************************ 00:16:58.165 12:27:21 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:58.165 12:27:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:58.165 12:27:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:58.165 12:27:21 -- common/autotest_common.sh@10 -- # set +x 00:16:58.165 ************************************ 00:16:58.165 START TEST blockdev_xnvme 00:16:58.165 ************************************ 00:16:58.165 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:58.165 * Looking for test storage... 
00:16:58.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:58.165 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:58.165 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:16:58.165 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:58.165 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.165 12:27:21 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.166 12:27:21 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:58.166 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.166 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.166 --rc genhtml_branch_coverage=1 00:16:58.166 --rc genhtml_function_coverage=1 00:16:58.166 --rc genhtml_legend=1 00:16:58.166 --rc geninfo_all_blocks=1 00:16:58.166 --rc geninfo_unexecuted_blocks=1 00:16:58.166 00:16:58.166 ' 00:16:58.166 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.166 --rc genhtml_branch_coverage=1 00:16:58.166 --rc genhtml_function_coverage=1 00:16:58.166 --rc genhtml_legend=1 
00:16:58.166 --rc geninfo_all_blocks=1 00:16:58.166 --rc geninfo_unexecuted_blocks=1 00:16:58.166 00:16:58.166 ' 00:16:58.166 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.166 --rc genhtml_branch_coverage=1 00:16:58.166 --rc genhtml_function_coverage=1 00:16:58.166 --rc genhtml_legend=1 00:16:58.166 --rc geninfo_all_blocks=1 00:16:58.166 --rc geninfo_unexecuted_blocks=1 00:16:58.166 00:16:58.166 ' 00:16:58.166 12:27:21 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:58.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.166 --rc genhtml_branch_coverage=1 00:16:58.166 --rc genhtml_function_coverage=1 00:16:58.166 --rc genhtml_legend=1 00:16:58.166 --rc geninfo_all_blocks=1 00:16:58.166 --rc geninfo_unexecuted_blocks=1 00:16:58.166 00:16:58.166 ' 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:58.166 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71095 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:58.424 12:27:21 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71095 00:16:58.424 12:27:21 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 71095 ']' 00:16:58.424 12:27:21 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.424 12:27:21 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.424 12:27:21 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.424 12:27:21 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.424 12:27:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.424 [2024-10-07 12:27:21.584663] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:16:58.424 [2024-10-07 12:27:21.584779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ] 00:16:58.682 [2024-10-07 12:27:21.754242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.682 [2024-10-07 12:27:21.956240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.619 12:27:22 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.619 12:27:22 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:16:59.619 12:27:22 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:59.619 12:27:22 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:59.619 12:27:22 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:59.619 12:27:22 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:59.619 12:27:22 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:00.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:00.460 Waiting for block devices as requested 00:17:00.460 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:00.746 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:00.746 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:00.746 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:06.021 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:06.021 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:06.021 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:17:06.021 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:17:06.022 
12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 nvme0n1 00:17:06.022 nvme1n1 00:17:06.022 nvme2n1 00:17:06.022 nvme2n2 00:17:06.022 nvme2n3 00:17:06.022 nvme3n1 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.022 12:27:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.022 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:06.282 12:27:29 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.282 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:06.282 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:06.283 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6f8b8164-fb30-4a98-804f-2125111ce56d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6f8b8164-fb30-4a98-804f-2125111ce56d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "738c8c77-4a4f-42ef-9768-c4fcdda2ba93"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "738c8c77-4a4f-42ef-9768-c4fcdda2ba93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "34dae3a6-8493-4ed8-ad85-e6e57f7b3fe1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "34dae3a6-8493-4ed8-ad85-e6e57f7b3fe1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "78ee0601-4d24-4e67-9af2-d04c02b0d51d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78ee0601-4d24-4e67-9af2-d04c02b0d51d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4a542ac7-4d0b-482b-bb5f-6367973933a7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a542ac7-4d0b-482b-bb5f-6367973933a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b6bd39b2-5bac-4763-a772-57bd2fd643a0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b6bd39b2-5bac-4763-a772-57bd2fd643a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:06.283 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:06.283 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:17:06.283 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:06.283 12:27:29 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71095 00:17:06.283 12:27:29 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 71095 ']' 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71095 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71095 00:17:06.283 killing process with pid 71095 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71095' 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71095 00:17:06.283 12:27:29 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71095 00:17:08.820 12:27:31 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:08.820 12:27:31 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:08.820 12:27:31 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:08.820 12:27:31 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.820 12:27:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.820 ************************************ 00:17:08.820 START TEST bdev_hello_world 00:17:08.820 ************************************ 00:17:08.820 12:27:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:08.820 [2024-10-07 12:27:32.048064] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:08.820 [2024-10-07 12:27:32.048437] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71476 ] 00:17:09.080 [2024-10-07 12:27:32.219678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.339 [2024-10-07 12:27:32.419701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.598 [2024-10-07 12:27:32.837919] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:09.598 [2024-10-07 12:27:32.837968] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:09.598 [2024-10-07 12:27:32.837987] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:09.598 [2024-10-07 12:27:32.840085] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:09.598 [2024-10-07 12:27:32.840493] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:09.598 [2024-10-07 12:27:32.840516] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:09.598 [2024-10-07 12:27:32.840879] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
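The NOTICE lines above are SPDK's hello_bdev example completing its round trip: open the nvme0n1 xNVMe bdev, write "Hello World!" through an I/O channel, read it back, and compare. Reduced to a sketch, the invocation traced at the start of this test is:

    # Run the example against the nvme0n1 bdev defined in bdev.json.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1

    # bdev.json is expected to hold entries equivalent to the
    # bdev_xnvme_create RPCs traced earlier; this JSON shape is an
    # assumption, not a copy of the actual file:
    # { "method": "bdev_xnvme_create",
    #   "params": { "filename": "/dev/nvme0n1", "name": "nvme0n1",
    #               "io_mechanism": "io_uring" } }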
00:17:09.598 00:17:09.598 [2024-10-07 12:27:32.840916] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:11.006 00:17:11.006 real 0m2.109s 00:17:11.006 user 0m1.731s 00:17:11.006 sys 0m0.261s 00:17:11.006 12:27:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.006 ************************************ 00:17:11.006 END TEST bdev_hello_world 00:17:11.006 ************************************ 00:17:11.006 12:27:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:11.006 12:27:34 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:11.006 12:27:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.006 12:27:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.006 12:27:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.006 ************************************ 00:17:11.006 START TEST bdev_bounds 00:17:11.006 ************************************ 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:11.006 Process bdevio pid: 71518 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71518 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71518' 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71518 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71518 ']' 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.006 12:27:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:11.006 [2024-10-07 12:27:34.232379] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
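bdev_bounds wraps SPDK's bdevio app in two steps: bring the app up on the same bdev.json and wait on its RPC socket, then fire the I/O-boundary test matrix from a companion script. A sketch of the sequence traced below ('' is the empty env_ctx blockdev.sh passes through, and -s 0 matches the PRE_RESERVED_MEM=0 set earlier):

    # Step 1: start bdevio; -w makes it wait for a perform_tests RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &

    # Step 2: once the socket is listening, run every registered CUnit suite.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests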
00:17:11.006 [2024-10-07 12:27:34.232505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71518 ] 00:17:11.267 [2024-10-07 12:27:34.405052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.525 [2024-10-07 12:27:34.620991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.525 [2024-10-07 12:27:34.621106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.525 [2024-10-07 12:27:34.621137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.092 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.092 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:12.092 12:27:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:12.092 I/O targets: 00:17:12.092 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:12.092 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:12.092 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:12.092 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:12.092 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:12.092 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:12.092 00:17:12.092 00:17:12.092 CUnit - A unit testing framework for C - Version 2.1-3 00:17:12.092 http://cunit.sourceforge.net/ 00:17:12.093 00:17:12.093 00:17:12.093 Suite: bdevio tests on: nvme3n1 00:17:12.093 Test: blockdev write read block ...passed 00:17:12.093 Test: blockdev write zeroes read block ...passed 00:17:12.093 Test: blockdev write zeroes read no split ...passed 00:17:12.093 Test: blockdev write zeroes read split ...passed 00:17:12.093 Test: blockdev write zeroes read split partial ...passed 00:17:12.093 Test: blockdev reset ...passed 00:17:12.093 Test: blockdev write read 8 blocks ...passed 00:17:12.093 Test: blockdev write read size > 128k ...passed 00:17:12.093 Test: blockdev write read invalid size ...passed 00:17:12.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:12.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:12.093 Test: blockdev write read max offset ...passed 00:17:12.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.093 Test: blockdev writev readv 8 blocks ...passed 00:17:12.093 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.093 Test: blockdev writev readv block ...passed 00:17:12.093 Test: blockdev writev readv size > 128k ...passed 00:17:12.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.093 Test: blockdev comparev and writev ...passed 00:17:12.093 Test: blockdev nvme passthru rw ...passed 00:17:12.093 Test: blockdev nvme passthru vendor specific ...passed 00:17:12.093 Test: blockdev nvme admin passthru ...passed 00:17:12.093 Test: blockdev copy ...passed 00:17:12.093 Suite: bdevio tests on: nvme2n3 00:17:12.093 Test: blockdev write read block ...passed 00:17:12.093 Test: blockdev write zeroes read block ...passed 00:17:12.093 Test: blockdev write zeroes read no split ...passed 00:17:12.093 Test: blockdev write zeroes read split ...passed 00:17:12.093 Test: blockdev write zeroes read split partial ...passed 00:17:12.093 Test: blockdev reset ...passed 
00:17:12.093 Test: blockdev write read 8 blocks ...passed 00:17:12.093 Test: blockdev write read size > 128k ...passed 00:17:12.093 Test: blockdev write read invalid size ...passed 00:17:12.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:12.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:12.093 Test: blockdev write read max offset ...passed 00:17:12.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.093 Test: blockdev writev readv 8 blocks ...passed 00:17:12.093 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.093 Test: blockdev writev readv block ...passed 00:17:12.093 Test: blockdev writev readv size > 128k ...passed 00:17:12.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.093 Test: blockdev comparev and writev ...passed 00:17:12.093 Test: blockdev nvme passthru rw ...passed 00:17:12.093 Test: blockdev nvme passthru vendor specific ...passed 00:17:12.093 Test: blockdev nvme admin passthru ...passed 00:17:12.093 Test: blockdev copy ...passed 00:17:12.093 Suite: bdevio tests on: nvme2n2 00:17:12.093 Test: blockdev write read block ...passed 00:17:12.093 Test: blockdev write zeroes read block ...passed 00:17:12.093 Test: blockdev write zeroes read no split ...passed 00:17:12.351 Test: blockdev write zeroes read split ...passed 00:17:12.352 Test: blockdev write zeroes read split partial ...passed 00:17:12.352 Test: blockdev reset ...passed 00:17:12.352 Test: blockdev write read 8 blocks ...passed 00:17:12.352 Test: blockdev write read size > 128k ...passed 00:17:12.352 Test: blockdev write read invalid size ...passed 00:17:12.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:12.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:12.352 Test: blockdev write read max offset ...passed 00:17:12.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.352 Test: blockdev writev readv 8 blocks ...passed 00:17:12.352 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.352 Test: blockdev writev readv block ...passed 00:17:12.352 Test: blockdev writev readv size > 128k ...passed 00:17:12.352 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.352 Test: blockdev comparev and writev ...passed 00:17:12.352 Test: blockdev nvme passthru rw ...passed 00:17:12.352 Test: blockdev nvme passthru vendor specific ...passed 00:17:12.352 Test: blockdev nvme admin passthru ...passed 00:17:12.352 Test: blockdev copy ...passed 00:17:12.352 Suite: bdevio tests on: nvme2n1 00:17:12.352 Test: blockdev write read block ...passed 00:17:12.352 Test: blockdev write zeroes read block ...passed 00:17:12.352 Test: blockdev write zeroes read no split ...passed 00:17:12.352 Test: blockdev write zeroes read split ...passed 00:17:12.352 Test: blockdev write zeroes read split partial ...passed 00:17:12.352 Test: blockdev reset ...passed 00:17:12.352 Test: blockdev write read 8 blocks ...passed 00:17:12.352 Test: blockdev write read size > 128k ...passed 00:17:12.352 Test: blockdev write read invalid size ...passed 00:17:12.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:12.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:12.352 Test: blockdev write read max offset ...passed 00:17:12.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.352 Test: blockdev writev readv 8 blocks 
...passed 00:17:12.352 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.352 Test: blockdev writev readv block ...passed 00:17:12.352 Test: blockdev writev readv size > 128k ...passed 00:17:12.352 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.352 Test: blockdev comparev and writev ...passed 00:17:12.352 Test: blockdev nvme passthru rw ...passed 00:17:12.352 Test: blockdev nvme passthru vendor specific ...passed 00:17:12.352 Test: blockdev nvme admin passthru ...passed 00:17:12.352 Test: blockdev copy ...passed 00:17:12.352 Suite: bdevio tests on: nvme1n1 00:17:12.352 Test: blockdev write read block ...passed 00:17:12.352 Test: blockdev write zeroes read block ...passed 00:17:12.352 Test: blockdev write zeroes read no split ...passed 00:17:12.352 Test: blockdev write zeroes read split ...passed 00:17:12.352 Test: blockdev write zeroes read split partial ...passed 00:17:12.352 Test: blockdev reset ...passed 00:17:12.352 Test: blockdev write read 8 blocks ...passed 00:17:12.352 Test: blockdev write read size > 128k ...passed 00:17:12.352 Test: blockdev write read invalid size ...passed 00:17:12.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:12.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:12.352 Test: blockdev write read max offset ...passed 00:17:12.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.352 Test: blockdev writev readv 8 blocks ...passed 00:17:12.352 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.352 Test: blockdev writev readv block ...passed 00:17:12.352 Test: blockdev writev readv size > 128k ...passed 00:17:12.352 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.352 Test: blockdev comparev and writev ...passed 00:17:12.352 Test: blockdev nvme passthru rw ...passed 00:17:12.352 Test: blockdev nvme passthru vendor specific ...passed 00:17:12.352 Test: blockdev nvme admin passthru ...passed 00:17:12.352 Test: blockdev copy ...passed 00:17:12.352 Suite: bdevio tests on: nvme0n1 00:17:12.352 Test: blockdev write read block ...passed 00:17:12.352 Test: blockdev write zeroes read block ...passed 00:17:12.352 Test: blockdev write zeroes read no split ...passed 00:17:12.610 Test: blockdev write zeroes read split ...passed 00:17:12.610 Test: blockdev write zeroes read split partial ...passed 00:17:12.610 Test: blockdev reset ...passed 00:17:12.610 Test: blockdev write read 8 blocks ...passed 00:17:12.610 Test: blockdev write read size > 128k ...passed 00:17:12.610 Test: blockdev write read invalid size ...passed 00:17:12.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:12.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:12.610 Test: blockdev write read max offset ...passed 00:17:12.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.610 Test: blockdev writev readv 8 blocks ...passed 00:17:12.610 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.610 Test: blockdev writev readv block ...passed 00:17:12.610 Test: blockdev writev readv size > 128k ...passed 00:17:12.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.610 Test: blockdev comparev and writev ...passed 00:17:12.610 Test: blockdev nvme passthru rw ...passed 00:17:12.610 Test: blockdev nvme passthru vendor specific ...passed 00:17:12.610 Test: blockdev nvme admin passthru ...passed 00:17:12.610 Test: blockdev copy ...passed 
00:17:12.610 00:17:12.610 Run Summary: Type Total Ran Passed Failed Inactive 00:17:12.610 suites 6 6 n/a 0 0 00:17:12.610 tests 138 138 138 0 0 00:17:12.610 asserts 780 780 780 0 n/a 00:17:12.610 00:17:12.610 Elapsed time = 1.317 seconds 00:17:12.610 0 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71518 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71518 ']' 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71518 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71518 00:17:12.610 killing process with pid 71518 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71518' 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71518 00:17:12.610 12:27:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71518 00:17:13.986 12:27:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:13.986 00:17:13.986 real 0m2.865s 00:17:13.986 user 0m6.649s 00:17:13.986 sys 0m0.434s 00:17:13.986 ************************************ 00:17:13.986 END TEST bdev_bounds 00:17:13.986 ************************************ 00:17:13.986 12:27:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.986 12:27:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:13.987 12:27:37 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:13.987 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:13.987 12:27:37 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.987 12:27:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.987 ************************************ 00:17:13.987 START TEST bdev_nbd 00:17:13.987 ************************************ 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
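The bdev_nbd stage being set up here exports each of the six xNVMe bdevs as a kernel /dev/nbdX node through SPDK's NBD server and exercises it, as the traces below show. The core round trip, reduced to a sketch using the socket path and names from this log:

    # Start a bare bdev_svc app owning the bdevs and the NBD RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &

    # Export a bdev (the server picks the next free node, here /dev/nbd0),
    # list the bdev-to-device mapping, then tear the export down.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0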
00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71578 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71578 /var/tmp/spdk-nbd.sock 00:17:13.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71578 ']' 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.987 12:27:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:13.987 [2024-10-07 12:27:37.199265] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
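Each export is then verified by the waitfornbd helper whose trace follows: poll /proc/partitions until the kernel registers the node, then prove the device is readable with one direct-I/O block read. A sketch, with the retry interval assumed since only the 20-attempt loop is visible in the xtrace:

    # Poll until the nbd0 entry appears in /proc/partitions (up to 20 tries).
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # interval assumed; not visible in the trace
    done

    # Read one 4096-byte block with O_DIRECT and confirm its size on disk.
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    test "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" -eq 4096
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest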
00:17:13.987 [2024-10-07 12:27:37.199403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.246 [2024-10-07 12:27:37.378703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.505 [2024-10-07 12:27:37.593783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:15.074 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.075 
1+0 records in 00:17:15.075 1+0 records out 00:17:15.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750252 s, 5.5 MB/s 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:15.075 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.334 1+0 records in 00:17:15.334 1+0 records out 00:17:15.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617245 s, 6.6 MB/s 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:15.334 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:15.593 12:27:38 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.593 1+0 records in 00:17:15.593 1+0 records out 00:17:15.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775513 s, 5.3 MB/s 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:15.593 12:27:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.852 1+0 records in 00:17:15.852 1+0 records out 00:17:15.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781458 s, 5.2 MB/s 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:15.852 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.112 1+0 records in 00:17:16.112 1+0 records out 00:17:16.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788226 s, 5.2 MB/s 00:17:16.112 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:17:16.371 12:27:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:16.371 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.630 1+0 records in 00:17:16.630 1+0 records out 00:17:16.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768855 s, 5.3 MB/s 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd0", 00:17:16.630 "bdev_name": "nvme0n1" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd1", 00:17:16.630 "bdev_name": "nvme1n1" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd2", 00:17:16.630 "bdev_name": "nvme2n1" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd3", 00:17:16.630 "bdev_name": "nvme2n2" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd4", 00:17:16.630 "bdev_name": "nvme2n3" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd5", 00:17:16.630 "bdev_name": "nvme3n1" 00:17:16.630 } 00:17:16.630 ]' 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:16.630 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd0", 00:17:16.630 "bdev_name": "nvme0n1" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd1", 00:17:16.630 "bdev_name": "nvme1n1" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd2", 00:17:16.630 "bdev_name": "nvme2n1" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd3", 00:17:16.630 "bdev_name": "nvme2n2" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": "/dev/nbd4", 00:17:16.630 "bdev_name": "nvme2n3" 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "nbd_device": 
"/dev/nbd5", 00:17:16.630 "bdev_name": "nvme3n1" 00:17:16.630 } 00:17:16.630 ]' 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.889 12:27:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.889 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.148 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.407 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.666 12:27:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.925 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.184 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:18.444 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:18.744 /dev/nbd0 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.744 1+0 records in 00:17:18.744 1+0 records out 00:17:18.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642551 s, 6.4 MB/s 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:18.744 12:27:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:17:19.004 /dev/nbd1 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.004 1+0 records in 00:17:19.004 1+0 records out 00:17:19.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731024 s, 5.6 MB/s 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:19.004 12:27:42 
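The waitfornbd helper traced at common/autotest_common.sh@868-889 is the readiness check every one of these exports goes through, and it runs two capped retry loops: the first (@871-873) polls /proc/partitions until the kernel has registered the device, the second (@884-889) proves the device actually answers by pulling one 4 KiB block with O_DIRECT and checking that the block landed. A sketch reconstructed from the traced commands; the sleep between attempts is an assumption (no sleep is visible in this log because every device is ready on the first pass), and /tmp/nbdtest stands in for the test directory's scratch file:

  waitfornbd() {
      local nbd_name=$1 i
      # Phase 1: wait for the kernel to list the device in /proc/partitions.
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off; not visible in this trace
      done
      # Phase 2: a direct-I/O read proves the device answers, not just exists.
      for (( i = 1; i <= 20; i++ )); do
          if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
              local size
              size=$(stat -c %s /tmp/nbdtest)
              rm -f /tmp/nbdtest
              [ "$size" != 0 ] && return 0
          fi
          sleep 0.1   # assumed back-off
      done
      return 1
  }

The final size test mirrors the '[' 4096 '!=' 0 ']' check in the trace: a successful one-block dd must leave exactly bs bytes behind.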
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:19.004 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:17:19.263 /dev/nbd10 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.263 1+0 records in 00:17:19.263 1+0 records out 00:17:19.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536446 s, 7.6 MB/s 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:19.263 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:17:19.522 /dev/nbd11 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.522 12:27:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.522 1+0 records in 00:17:19.522 1+0 records out 00:17:19.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692414 s, 5.9 MB/s 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:19.522 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:17:19.781 /dev/nbd12 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.781 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.782 1+0 records in 00:17:19.782 1+0 records out 00:17:19.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080036 s, 5.1 MB/s 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:19.782 12:27:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:20.041 /dev/nbd13 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.041 1+0 records in 00:17:20.041 1+0 records out 00:17:20.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840287 s, 4.9 MB/s 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.041 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd0", 00:17:20.300 "bdev_name": "nvme0n1" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd1", 00:17:20.300 "bdev_name": "nvme1n1" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd10", 00:17:20.300 "bdev_name": "nvme2n1" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd11", 00:17:20.300 "bdev_name": "nvme2n2" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd12", 00:17:20.300 "bdev_name": "nvme2n3" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd13", 00:17:20.300 "bdev_name": "nvme3n1" 00:17:20.300 } 00:17:20.300 ]' 00:17:20.300 12:27:43 
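nbd_get_disks is the only view these helpers have of the current bdev-to-device mapping, and they always consume its JSON through jq, exactly as echoed below. The same pipelines are useful interactively; both are lifted from the traced nbd_common.sh@63-65 lines:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # List the /dev/nbdX nodes the SPDK app currently exports.
  $rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'
  # Count them; grep -c exits non-zero on zero matches, which is why the
  # helper appends true (visible in this trace as nbd_common.sh@65 true).
  $rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' \
      | grep -c /dev/nbd || true

The teardown path later in this run repeats the count against an empty list and insists on 0 before declaring the stop complete.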
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd0", 00:17:20.300 "bdev_name": "nvme0n1" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd1", 00:17:20.300 "bdev_name": "nvme1n1" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd10", 00:17:20.300 "bdev_name": "nvme2n1" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd11", 00:17:20.300 "bdev_name": "nvme2n2" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd12", 00:17:20.300 "bdev_name": "nvme2n3" 00:17:20.300 }, 00:17:20.300 { 00:17:20.300 "nbd_device": "/dev/nbd13", 00:17:20.300 "bdev_name": "nvme3n1" 00:17:20.300 } 00:17:20.300 ]' 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:20.300 /dev/nbd1 00:17:20.300 /dev/nbd10 00:17:20.300 /dev/nbd11 00:17:20.300 /dev/nbd12 00:17:20.300 /dev/nbd13' 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:20.300 /dev/nbd1 00:17:20.300 /dev/nbd10 00:17:20.300 /dev/nbd11 00:17:20.300 /dev/nbd12 00:17:20.300 /dev/nbd13' 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:20.300 256+0 records in 00:17:20.300 256+0 records out 00:17:20.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137399 s, 76.3 MB/s 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:20.300 256+0 records in 00:17:20.300 256+0 records out 00:17:20.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122989 s, 8.5 MB/s 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.300 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:20.560 256+0 records in 00:17:20.560 256+0 records out 00:17:20.560 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.15334 s, 6.8 MB/s 00:17:20.560 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.560 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:20.819 256+0 records in 00:17:20.819 256+0 records out 00:17:20.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135048 s, 7.8 MB/s 00:17:20.819 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.819 12:27:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:20.819 256+0 records in 00:17:20.819 256+0 records out 00:17:20.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129712 s, 8.1 MB/s 00:17:20.819 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:20.819 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:21.079 256+0 records in 00:17:21.079 256+0 records out 00:17:21.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161087 s, 6.5 MB/s 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:21.079 256+0 records in 00:17:21.079 256+0 records out 00:17:21.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130197 s, 8.1 MB/s 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.079 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.338 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.597 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:21.597 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:21.597 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.598 12:27:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:21.856 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.857 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.857 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.857 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.116 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.375 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.634 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:22.893 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:22.893 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:22.893 12:27:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:22.893 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:23.152 malloc_lvol_verify 00:17:23.152 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:23.411 343e3fe6-d64e-4545-a1ff-221bc996f940 00:17:23.411 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:23.411 b42688c4-4905-4c4a-801c-73f258534105 00:17:23.411 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:23.670 /dev/nbd0 00:17:23.670 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:23.670 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:23.670 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:23.670 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:23.671 mke2fs 1.47.0 (5-Feb-2023) 00:17:23.671 
Discarding device blocks: 0/4096 done 00:17:23.671 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:23.671 00:17:23.671 Allocating group tables: 0/1 done 00:17:23.671 Writing inode tables: 0/1 done 00:17:23.671 Creating journal (1024 blocks): done 00:17:23.671 Writing superblocks and filesystem accounting information: 0/1 done 00:17:23.671 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.671 12:27:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71578 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71578 ']' 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71578 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71578 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.930 killing process with pid 71578 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71578' 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71578 00:17:23.930 12:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71578 00:17:25.308 12:27:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:25.308 00:17:25.308 real 0m11.409s 00:17:25.308 user 0m14.463s 00:17:25.308 sys 0m4.865s 00:17:25.308 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.308 12:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:25.308 ************************************ 00:17:25.308 END TEST bdev_nbd 00:17:25.308 
************************************ 00:17:25.308 12:27:48 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:25.308 12:27:48 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:17:25.308 12:27:48 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:17:25.308 12:27:48 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:25.308 12:27:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.308 12:27:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.308 12:27:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.308 ************************************ 00:17:25.308 START TEST bdev_fio 00:17:25.308 ************************************ 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:25.308 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:25.308 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:25.309 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:25.568 12:27:48 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:25.568 ************************************ 00:17:25.568 START TEST bdev_fio_rw_verify 00:17:25.568 ************************************ 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:25.568 12:27:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.828 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.828 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.828 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.828 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.828 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.828 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.828 fio-3.35 00:17:25.828 Starting 6 threads 00:17:38.034 00:17:38.034 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71994: Mon Oct 7 12:27:59 2024 00:17:38.034 read: IOPS=34.1k, BW=133MiB/s (140MB/s)(1333MiB/10001msec) 00:17:38.034 slat (usec): min=2, max=933, avg= 7.31, stdev= 6.38 00:17:38.034 clat (usec): min=83, max=4663, avg=521.86, stdev=238.14 
00:17:38.034 lat (usec): min=86, max=4674, avg=529.17, stdev=239.18 00:17:38.034 clat percentiles (usec): 00:17:38.034 | 50.000th=[ 515], 99.000th=[ 1172], 99.900th=[ 1975], 99.990th=[ 3687], 00:17:38.034 | 99.999th=[ 4490] 00:17:38.034 write: IOPS=34.5k, BW=135MiB/s (141MB/s)(1349MiB/10001msec); 0 zone resets 00:17:38.034 slat (usec): min=10, max=6161, avg=25.56, stdev=36.44 00:17:38.034 clat (usec): min=79, max=9177, avg=631.41, stdev=258.78 00:17:38.034 lat (usec): min=92, max=9221, avg=656.97, stdev=264.11 00:17:38.034 clat percentiles (usec): 00:17:38.034 | 50.000th=[ 619], 99.000th=[ 1418], 99.900th=[ 2073], 99.990th=[ 2933], 00:17:38.034 | 99.999th=[ 9110] 00:17:38.034 bw ( KiB/s): min=110105, max=166208, per=99.53%, avg=137487.37, stdev=2522.92, samples=114 00:17:38.034 iops : min=27526, max=41552, avg=34371.53, stdev=630.73, samples=114 00:17:38.034 lat (usec) : 100=0.01%, 250=8.26%, 500=31.38%, 750=40.04%, 1000=15.45% 00:17:38.034 lat (msec) : 2=4.76%, 4=0.11%, 10=0.01% 00:17:38.034 cpu : usr=55.13%, sys=29.49%, ctx=8107, majf=0, minf=28268 00:17:38.034 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.034 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.034 issued rwts: total=341258,345378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.034 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:38.034 00:17:38.034 Run status group 0 (all jobs): 00:17:38.034 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=1333MiB (1398MB), run=10001-10001msec 00:17:38.034 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1349MiB (1415MB), run=10001-10001msec 00:17:38.034 ----------------------------------------------------- 00:17:38.034 Suppressions used: 00:17:38.034 count bytes template 00:17:38.034 6 48 /usr/src/fio/parse.c 00:17:38.034 3870 371520 /usr/src/fio/iolog.c 00:17:38.034 1 8 libtcmalloc_minimal.so 00:17:38.034 1 904 libcrypto.so 00:17:38.034 ----------------------------------------------------- 00:17:38.034 00:17:38.034 00:17:38.034 real 0m12.616s 00:17:38.034 user 0m35.181s 00:17:38.034 sys 0m18.137s 00:17:38.034 12:28:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.034 12:28:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:38.034 ************************************ 00:17:38.034 END TEST bdev_fio_rw_verify 00:17:38.034 ************************************ 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 
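A note on the sanitizer handling traced at the top of this fio run: when SPDK is built with ASan, fio cannot dlopen the spdk_bdev plugin unless the ASan runtime is already loaded, so the harness inspects the plugin's dependencies and preloads libasan ahead of the plugin itself. A minimal sketch of that dance, assuming the same paths as this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer runtime before the plugin so fio's dlopen succeeds under ASan.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k \
        --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json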
00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:38.294 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6f8b8164-fb30-4a98-804f-2125111ce56d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6f8b8164-fb30-4a98-804f-2125111ce56d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "738c8c77-4a4f-42ef-9768-c4fcdda2ba93"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "738c8c77-4a4f-42ef-9768-c4fcdda2ba93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "34dae3a6-8493-4ed8-ad85-e6e57f7b3fe1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "34dae3a6-8493-4ed8-ad85-e6e57f7b3fe1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "78ee0601-4d24-4e67-9af2-d04c02b0d51d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78ee0601-4d24-4e67-9af2-d04c02b0d51d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4a542ac7-4d0b-482b-bb5f-6367973933a7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a542ac7-4d0b-482b-bb5f-6367973933a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b6bd39b2-5bac-4763-a772-57bd2fd643a0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b6bd39b2-5bac-4763-a772-57bd2fd643a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:38.295 /home/vagrant/spdk_repo/spdk 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:38.295 00:17:38.295 
real 0m12.850s 00:17:38.295 user 0m35.279s 00:17:38.295 sys 0m18.280s 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.295 12:28:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:38.295 ************************************ 00:17:38.295 END TEST bdev_fio 00:17:38.295 ************************************ 00:17:38.295 12:28:01 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:38.295 12:28:01 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:38.295 12:28:01 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:38.295 12:28:01 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.295 12:28:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:38.295 ************************************ 00:17:38.295 START TEST bdev_verify 00:17:38.295 ************************************ 00:17:38.295 12:28:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:38.554 [2024-10-07 12:28:01.588044] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:38.554 [2024-10-07 12:28:01.588182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72175 ] 00:17:38.554 [2024-10-07 12:28:01.761765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:38.812 [2024-10-07 12:28:02.026050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.812 [2024-10-07 12:28:02.026109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.380 Running I/O for 5 seconds... 
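For orientation while the five-second run proceeds: bdevperf loads the same bdev.json layout and drives a verify workload (write a known pattern, read it back, compare) against every bdev. The invocation, restated with the flag meanings; the per-core behavior of -C is inferred from the duplicate per-core job lines in the results below:

    # -q 128: outstanding I/Os per job; -o 4096: 4 KiB I/O size; -w verify: write,
    # read back, compare; -t 5: seconds; -m 0x3: run reactors on cores 0 and 1;
    # -C: apparently lets every core open every bdev, hence one job per core mask below.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''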
00:17:41.694 23285.00 IOPS, 90.96 MiB/s [2024-10-07T12:28:05.921Z] 24368.00 IOPS, 95.19 MiB/s [2024-10-07T12:28:06.855Z] 24476.33 IOPS, 95.61 MiB/s [2024-10-07T12:28:07.813Z] 24352.00 IOPS, 95.12 MiB/s [2024-10-07T12:28:07.813Z] 24531.20 IOPS, 95.82 MiB/s 00:17:44.522 Latency(us) 00:17:44.522 [2024-10-07T12:28:07.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.522 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x0 length 0xa0000 00:17:44.522 nvme0n1 : 5.04 2059.83 8.05 0.00 0.00 61904.92 8001.18 56008.28 00:17:44.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0xa0000 length 0xa0000 00:17:44.522 nvme0n1 : 5.07 1692.49 6.61 0.00 0.00 74722.71 5158.66 74537.33 00:17:44.522 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x0 length 0xbd0bd 00:17:44.522 nvme1n1 : 5.05 3123.39 12.20 0.00 0.00 40778.01 4816.50 46112.08 00:17:44.522 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:44.522 nvme1n1 : 5.04 2519.11 9.84 0.00 0.00 50599.22 4737.54 70326.18 00:17:44.522 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x0 length 0x80000 00:17:44.522 nvme2n1 : 5.04 2058.38 8.04 0.00 0.00 61842.21 10317.31 51586.57 00:17:44.522 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x80000 length 0x80000 00:17:44.522 nvme2n1 : 5.06 1720.10 6.72 0.00 0.00 73991.09 13107.20 70747.30 00:17:44.522 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x0 length 0x80000 00:17:44.522 nvme2n2 : 5.04 2083.09 8.14 0.00 0.00 61025.39 11843.86 45901.52 00:17:44.522 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x80000 length 0x80000 00:17:44.522 nvme2n2 : 5.04 1676.78 6.55 0.00 0.00 75713.88 14949.58 79169.59 00:17:44.522 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x0 length 0x80000 00:17:44.522 nvme2n3 : 5.05 2054.24 8.02 0.00 0.00 61797.23 9053.97 50954.90 00:17:44.522 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x80000 length 0x80000 00:17:44.522 nvme2n3 : 5.05 1671.43 6.53 0.00 0.00 75836.60 12212.33 62325.00 00:17:44.522 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x0 length 0x20000 00:17:44.522 nvme3n1 : 5.05 2053.64 8.02 0.00 0.00 61755.85 4790.18 56008.28 00:17:44.522 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:44.522 Verification LBA range: start 0x20000 length 0x20000 00:17:44.522 nvme3n1 : 5.07 1691.59 6.61 0.00 0.00 74849.60 5316.58 70326.18 00:17:44.522 [2024-10-07T12:28:07.813Z] =================================================================================================================== 00:17:44.522 [2024-10-07T12:28:07.814Z] Total : 24404.06 95.33 0.00 0.00 62481.29 4737.54 79169.59 00:17:45.897 00:17:45.897 real 0m7.624s 00:17:45.897 user 0m11.593s 00:17:45.897 sys 0m1.972s 00:17:45.897 12:28:09 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.897 12:28:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:45.897 ************************************ 00:17:45.897 END TEST bdev_verify 00:17:45.897 ************************************ 00:17:45.897 12:28:09 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:45.897 12:28:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:45.897 12:28:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.897 12:28:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.156 ************************************ 00:17:46.156 START TEST bdev_verify_big_io 00:17:46.156 ************************************ 00:17:46.156 12:28:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:46.156 [2024-10-07 12:28:09.290743] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:46.156 [2024-10-07 12:28:09.290862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72285 ] 00:17:46.414 [2024-10-07 12:28:09.463501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.671 [2024-10-07 12:28:09.717783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.671 [2024-10-07 12:28:09.717824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.236 Running I/O for 5 seconds... 
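Same harness as the verify pass above; the only material change is the I/O size, which is what makes this the big-I/O variant. With 64 KiB requests the interesting column in the results shifts from IOPS to MiB/s:

    # Identical to bdev_verify except -o 65536 (64 KiB) in place of 4096.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''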
00:17:51.899 2190.00 IOPS, 136.88 MiB/s [2024-10-07T12:28:16.155Z] 2894.00 IOPS, 180.88 MiB/s [2024-10-07T12:28:16.723Z] 3315.33 IOPS, 207.21 MiB/s 00:17:53.432 Latency(us) 00:17:53.432 [2024-10-07T12:28:16.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.432 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x0 length 0xa000 00:17:53.432 nvme0n1 : 5.50 171.52 10.72 0.00 0.00 729005.25 12054.41 1064578.36 00:17:53.432 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0xa000 length 0xa000 00:17:53.432 nvme0n1 : 5.68 146.39 9.15 0.00 0.00 846609.16 98540.88 1172383.77 00:17:53.432 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x0 length 0xbd0b 00:17:53.432 nvme1n1 : 5.52 139.05 8.69 0.00 0.00 884252.51 29478.04 2371718.89 00:17:53.432 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:53.432 nvme1n1 : 5.64 153.19 9.57 0.00 0.00 772986.97 10212.04 1448635.12 00:17:53.432 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x0 length 0x8000 00:17:53.432 nvme2n1 : 5.51 151.08 9.44 0.00 0.00 802120.74 15265.41 848967.56 00:17:53.432 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x8000 length 0x8000 00:17:53.432 nvme2n1 : 5.69 174.48 10.91 0.00 0.00 655748.32 48217.65 929821.61 00:17:53.432 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x0 length 0x8000 00:17:53.432 nvme2n2 : 5.49 174.74 10.92 0.00 0.00 683570.31 118754.39 710841.88 00:17:53.432 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x8000 length 0x8000 00:17:53.432 nvme2n2 : 5.78 130.06 8.13 0.00 0.00 853167.93 51376.01 1516013.49 00:17:53.432 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x0 length 0x8000 00:17:53.432 nvme2n3 : 5.52 207.22 12.95 0.00 0.00 573352.39 10738.43 717579.72 00:17:53.432 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x8000 length 0x8000 00:17:53.432 nvme2n3 : 5.88 151.61 9.48 0.00 0.00 708019.42 22319.09 2183059.43 00:17:53.432 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x0 length 0x2000 00:17:53.432 nvme3n1 : 5.52 179.65 11.23 0.00 0.00 651213.14 11949.13 1017413.50 00:17:53.432 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:53.432 Verification LBA range: start 0x2000 length 0x2000 00:17:53.432 nvme3n1 : 6.07 278.54 17.41 0.00 0.00 376418.95 3145.20 1886594.57 00:17:53.432 [2024-10-07T12:28:16.723Z] =================================================================================================================== 00:17:53.432 [2024-10-07T12:28:16.723Z] Total : 2057.54 128.60 0.00 0.00 679490.38 3145.20 2371718.89 00:17:54.811 00:17:54.811 real 0m8.811s 00:17:54.811 user 0m15.646s 00:17:54.811 sys 0m0.728s 00:17:54.811 12:28:18 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:54.811 12:28:18 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.811 ************************************ 00:17:54.811 END TEST bdev_verify_big_io 00:17:54.811 ************************************ 00:17:54.811 12:28:18 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:54.811 12:28:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:54.811 12:28:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:54.811 12:28:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.811 ************************************ 00:17:54.811 START TEST bdev_write_zeroes 00:17:54.811 ************************************ 00:17:54.811 12:28:18 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:55.071 [2024-10-07 12:28:18.176997] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:55.071 [2024-10-07 12:28:18.177121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:17:55.071 [2024-10-07 12:28:18.348268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.328 [2024-10-07 12:28:18.557615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.895 Running I/O for 1 seconds... 
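This pass swaps the workload rather than the size: -w write_zeroes issues zero-fill requests instead of payload writes, for one second on a single core (no -m mask is given, and the startup notice above reports one core available, matching the Core Mask 0x1 job lines below). In outline:

    # Exercises each bdev's write_zeroes path; data verification does not apply here.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''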
00:17:56.832 56640.00 IOPS, 221.25 MiB/s 00:17:56.832 Latency(us) 00:17:56.832 [2024-10-07T12:28:20.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.832 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.832 nvme0n1 : 1.02 8868.52 34.64 0.00 0.00 14419.35 7948.54 38321.45 00:17:56.832 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.832 nvme1n1 : 1.03 11706.03 45.73 0.00 0.00 10916.91 5448.17 36426.44 00:17:56.832 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.832 nvme2n1 : 1.03 8806.96 34.40 0.00 0.00 14413.36 8317.02 37268.67 00:17:56.832 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.832 nvme2n2 : 1.03 8795.83 34.36 0.00 0.00 14422.55 8264.38 36636.99 00:17:56.832 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.832 nvme2n3 : 1.03 8784.66 34.32 0.00 0.00 14432.89 8317.02 36215.88 00:17:56.832 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:56.832 nvme3n1 : 1.04 8773.56 34.27 0.00 0.00 14442.01 8264.38 35584.21 00:17:56.832 [2024-10-07T12:28:20.123Z] =================================================================================================================== 00:17:56.832 [2024-10-07T12:28:20.123Z] Total : 55735.55 217.72 0.00 0.00 13689.84 5448.17 38321.45 00:17:58.209 00:17:58.209 real 0m3.218s 00:17:58.209 user 0m2.396s 00:17:58.209 sys 0m0.635s 00:17:58.209 12:28:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.209 12:28:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:58.209 ************************************ 00:17:58.209 END TEST bdev_write_zeroes 00:17:58.209 ************************************ 00:17:58.209 12:28:21 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.209 12:28:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:58.209 12:28:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.209 12:28:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:58.209 ************************************ 00:17:58.209 START TEST bdev_json_nonenclosed 00:17:58.209 ************************************ 00:17:58.209 12:28:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.209 [2024-10-07 12:28:21.472531] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
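bdev_json_nonenclosed, starting here, is a negative test: bdevperf is fed a config file whose JSON is not wrapped in a top-level object, and the expected outcome is a clean, diagnosed failure rather than a crash. An illustrative stand-in for nonenclosed.json (the repo file's exact contents may differ):

    # A JSON fragment with no enclosing {}; the loader must reject it.
    echo '"subsystems": []' > /tmp/nonenclosed.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    # Expected, as the error below confirms: json_config_prepare_ctx reports
    # "not enclosed in {}" and the app stops with a non-zero status.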
00:17:58.209 [2024-10-07 12:28:21.472652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72465 ] 00:17:58.468 [2024-10-07 12:28:21.644753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.727 [2024-10-07 12:28:21.848040] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.727 [2024-10-07 12:28:21.848156] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:58.727 [2024-10-07 12:28:21.848179] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:58.727 [2024-10-07 12:28:21.848192] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:58.986 00:17:58.986 real 0m0.876s 00:17:58.986 user 0m0.621s 00:17:58.986 sys 0m0.149s 00:17:58.986 12:28:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.986 12:28:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:58.986 ************************************ 00:17:58.986 END TEST bdev_json_nonenclosed 00:17:58.986 ************************************ 00:17:59.245 12:28:22 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.245 12:28:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:59.245 12:28:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.245 12:28:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.245 ************************************ 00:17:59.245 START TEST bdev_json_nonarray 00:17:59.245 ************************************ 00:17:59.245 12:28:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.245 [2024-10-07 12:28:22.423983] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:59.245 [2024-10-07 12:28:22.424107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ] 00:17:59.505 [2024-10-07 12:28:22.595212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.764 [2024-10-07 12:28:22.804363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.764 [2024-10-07 12:28:22.804478] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
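The companion negative test, whose error is logged just above: this time the config is enclosed properly, but 'subsystems' is given as an object where an array of subsystem objects is required. An illustrative stand-in for nonarray.json (again, the repo file's exact contents may differ):

    # 'subsystems' must be a JSON array; an object here must be rejected.
    echo '{ "subsystems": { "subsystem": "bdev" } }' > /tmp/nonarray.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    # Expected: the "'subsystems' should be an array" error above, then
    # spdk_app_stop with a non-zero status.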
00:17:59.764 [2024-10-07 12:28:22.804500] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.764 [2024-10-07 12:28:22.804512] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.026 00:18:00.026 real 0m0.884s 00:18:00.026 user 0m0.616s 00:18:00.026 sys 0m0.162s 00:18:00.026 12:28:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.026 12:28:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.026 ************************************ 00:18:00.026 END TEST bdev_json_nonarray 00:18:00.026 ************************************ 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:00.026 12:28:23 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:00.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:02.342 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.342 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.342 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.601 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.601 00:18:02.601 real 1m4.561s 00:18:02.601 user 1m40.767s 00:18:02.601 sys 0m31.603s 00:18:02.601 12:28:25 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.601 12:28:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.601 ************************************ 00:18:02.601 END TEST blockdev_xnvme 00:18:02.601 ************************************ 00:18:02.601 12:28:25 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:02.601 12:28:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:02.601 12:28:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.601 12:28:25 -- common/autotest_common.sh@10 -- # set +x 00:18:02.601 ************************************ 00:18:02.601 START TEST ublk 00:18:02.601 ************************************ 00:18:02.601 12:28:25 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:02.861 * Looking for test storage... 
00:18:02.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:02.861 12:28:25 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.861 12:28:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.861 12:28:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.861 12:28:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.861 12:28:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.861 12:28:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.861 12:28:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:02.861 12:28:26 ublk -- scripts/common.sh@345 -- # : 1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.861 12:28:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.861 12:28:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@353 -- # local d=1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.861 12:28:26 ublk -- scripts/common.sh@355 -- # echo 1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.861 12:28:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@353 -- # local d=2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.861 12:28:26 ublk -- scripts/common.sh@355 -- # echo 2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.861 12:28:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.861 12:28:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.861 12:28:26 ublk -- scripts/common.sh@368 -- # return 0 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:02.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.861 --rc genhtml_branch_coverage=1 00:18:02.861 --rc genhtml_function_coverage=1 00:18:02.861 --rc genhtml_legend=1 00:18:02.861 --rc geninfo_all_blocks=1 00:18:02.861 --rc geninfo_unexecuted_blocks=1 00:18:02.861 00:18:02.861 ' 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:02.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.861 --rc genhtml_branch_coverage=1 00:18:02.861 --rc genhtml_function_coverage=1 00:18:02.861 --rc genhtml_legend=1 00:18:02.861 --rc geninfo_all_blocks=1 00:18:02.861 --rc geninfo_unexecuted_blocks=1 00:18:02.861 00:18:02.861 ' 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:02.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.861 --rc genhtml_branch_coverage=1 00:18:02.861 --rc 
genhtml_function_coverage=1 00:18:02.861 --rc genhtml_legend=1 00:18:02.861 --rc geninfo_all_blocks=1 00:18:02.861 --rc geninfo_unexecuted_blocks=1 00:18:02.861 00:18:02.861 ' 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:02.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.861 --rc genhtml_branch_coverage=1 00:18:02.861 --rc genhtml_function_coverage=1 00:18:02.861 --rc genhtml_legend=1 00:18:02.861 --rc geninfo_all_blocks=1 00:18:02.861 --rc geninfo_unexecuted_blocks=1 00:18:02.861 00:18:02.861 ' 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:02.861 12:28:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:02.861 12:28:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:02.861 12:28:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:02.861 12:28:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:02.861 12:28:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:02.861 12:28:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:02.861 12:28:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:02.861 12:28:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:02.861 12:28:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.861 12:28:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.861 ************************************ 00:18:02.861 START TEST test_save_ublk_config 00:18:02.861 ************************************ 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72797 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72797 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72797 ']' 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
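test_save_ublk_config, now starting, builds a small live configuration before snapshotting it: a 32 MiB malloc bdev (8192 blocks of 4096 bytes, per the saved config printed further below) exported through a single-queue ublk device. The rpc_cmd calls traced below amount to roughly this sequence; rpc.py flag spellings vary across SPDK versions, so treat the JSON method and parameter names in the saved config as the authoritative form:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                      # spawn the ublk target (cpumask "1")
    $rpc bdev_malloc_create -b malloc0 32 4096   # 32 MiB backing bdev, 4 KiB blocks
    $rpc ublk_start_disk malloc0 0               # expose malloc0 as /dev/ublkb0, ublk id 0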
00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.861 12:28:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:03.120 [2024-10-07 12:28:26.253051] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:18:03.120 [2024-10-07 12:28:26.253198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72797 ] 00:18:03.379 [2024-10-07 12:28:26.418504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.379 [2024-10-07 12:28:26.642093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.316 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.316 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:18:04.316 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:04.316 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:04.316 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.316 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:04.316 [2024-10-07 12:28:27.505953] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:04.316 [2024-10-07 12:28:27.507074] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:04.316 malloc0 00:18:04.316 [2024-10-07 12:28:27.584067] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:04.316 [2024-10-07 12:28:27.584174] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:04.316 [2024-10-07 12:28:27.584187] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:04.316 [2024-10-07 12:28:27.584198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:04.316 [2024-10-07 12:28:27.593059] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:04.316 [2024-10-07 12:28:27.593088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:04.317 [2024-10-07 12:28:27.600552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:04.317 [2024-10-07 12:28:27.600656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:04.575 [2024-10-07 12:28:27.617932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:04.575 0 00:18:04.575 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.575 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:04.575 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.575 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:04.835 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.835 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:04.835 
"subsystems": [ 00:18:04.835 { 00:18:04.835 "subsystem": "fsdev", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "fsdev_set_opts", 00:18:04.835 "params": { 00:18:04.835 "fsdev_io_pool_size": 65535, 00:18:04.835 "fsdev_io_cache_size": 256 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "keyring", 00:18:04.835 "config": [] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "iobuf", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "iobuf_set_options", 00:18:04.835 "params": { 00:18:04.835 "small_pool_count": 8192, 00:18:04.835 "large_pool_count": 1024, 00:18:04.835 "small_bufsize": 8192, 00:18:04.835 "large_bufsize": 135168 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "sock", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "sock_set_default_impl", 00:18:04.835 "params": { 00:18:04.835 "impl_name": "posix" 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "sock_impl_set_options", 00:18:04.835 "params": { 00:18:04.835 "impl_name": "ssl", 00:18:04.835 "recv_buf_size": 4096, 00:18:04.835 "send_buf_size": 4096, 00:18:04.835 "enable_recv_pipe": true, 00:18:04.835 "enable_quickack": false, 00:18:04.835 "enable_placement_id": 0, 00:18:04.835 "enable_zerocopy_send_server": true, 00:18:04.835 "enable_zerocopy_send_client": false, 00:18:04.835 "zerocopy_threshold": 0, 00:18:04.835 "tls_version": 0, 00:18:04.835 "enable_ktls": false 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "sock_impl_set_options", 00:18:04.835 "params": { 00:18:04.835 "impl_name": "posix", 00:18:04.835 "recv_buf_size": 2097152, 00:18:04.835 "send_buf_size": 2097152, 00:18:04.835 "enable_recv_pipe": true, 00:18:04.835 "enable_quickack": false, 00:18:04.835 "enable_placement_id": 0, 00:18:04.835 "enable_zerocopy_send_server": true, 00:18:04.835 "enable_zerocopy_send_client": false, 00:18:04.835 "zerocopy_threshold": 0, 00:18:04.835 "tls_version": 0, 00:18:04.835 "enable_ktls": false 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "vmd", 00:18:04.835 "config": [] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "accel", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "accel_set_options", 00:18:04.835 "params": { 00:18:04.835 "small_cache_size": 128, 00:18:04.835 "large_cache_size": 16, 00:18:04.835 "task_count": 2048, 00:18:04.835 "sequence_count": 2048, 00:18:04.835 "buf_count": 2048 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "bdev", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "bdev_set_options", 00:18:04.835 "params": { 00:18:04.835 "bdev_io_pool_size": 65535, 00:18:04.835 "bdev_io_cache_size": 256, 00:18:04.835 "bdev_auto_examine": true, 00:18:04.835 "iobuf_small_cache_size": 128, 00:18:04.835 "iobuf_large_cache_size": 16 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "bdev_raid_set_options", 00:18:04.835 "params": { 00:18:04.835 "process_window_size_kb": 1024, 00:18:04.835 "process_max_bandwidth_mb_sec": 0 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "bdev_iscsi_set_options", 00:18:04.835 "params": { 00:18:04.835 "timeout_sec": 30 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "bdev_nvme_set_options", 00:18:04.835 "params": { 00:18:04.835 "action_on_timeout": "none", 00:18:04.835 "timeout_us": 0, 00:18:04.835 
"timeout_admin_us": 0, 00:18:04.835 "keep_alive_timeout_ms": 10000, 00:18:04.835 "arbitration_burst": 0, 00:18:04.835 "low_priority_weight": 0, 00:18:04.835 "medium_priority_weight": 0, 00:18:04.835 "high_priority_weight": 0, 00:18:04.835 "nvme_adminq_poll_period_us": 10000, 00:18:04.835 "nvme_ioq_poll_period_us": 0, 00:18:04.835 "io_queue_requests": 0, 00:18:04.835 "delay_cmd_submit": true, 00:18:04.835 "transport_retry_count": 4, 00:18:04.835 "bdev_retry_count": 3, 00:18:04.835 "transport_ack_timeout": 0, 00:18:04.835 "ctrlr_loss_timeout_sec": 0, 00:18:04.835 "reconnect_delay_sec": 0, 00:18:04.835 "fast_io_fail_timeout_sec": 0, 00:18:04.835 "disable_auto_failback": false, 00:18:04.835 "generate_uuids": false, 00:18:04.835 "transport_tos": 0, 00:18:04.835 "nvme_error_stat": false, 00:18:04.835 "rdma_srq_size": 0, 00:18:04.835 "io_path_stat": false, 00:18:04.835 "allow_accel_sequence": false, 00:18:04.835 "rdma_max_cq_size": 0, 00:18:04.835 "rdma_cm_event_timeout_ms": 0, 00:18:04.835 "dhchap_digests": [ 00:18:04.835 "sha256", 00:18:04.835 "sha384", 00:18:04.835 "sha512" 00:18:04.835 ], 00:18:04.835 "dhchap_dhgroups": [ 00:18:04.835 "null", 00:18:04.835 "ffdhe2048", 00:18:04.835 "ffdhe3072", 00:18:04.835 "ffdhe4096", 00:18:04.835 "ffdhe6144", 00:18:04.835 "ffdhe8192" 00:18:04.835 ] 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "bdev_nvme_set_hotplug", 00:18:04.835 "params": { 00:18:04.835 "period_us": 100000, 00:18:04.835 "enable": false 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "bdev_malloc_create", 00:18:04.835 "params": { 00:18:04.835 "name": "malloc0", 00:18:04.835 "num_blocks": 8192, 00:18:04.835 "block_size": 4096, 00:18:04.835 "physical_block_size": 4096, 00:18:04.835 "uuid": "fb0ad217-1062-453c-938f-2ec2cb54d092", 00:18:04.835 "optimal_io_boundary": 0, 00:18:04.835 "md_size": 0, 00:18:04.835 "dif_type": 0, 00:18:04.835 "dif_is_head_of_md": false, 00:18:04.835 "dif_pi_format": 0 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "bdev_wait_for_examine" 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "scsi", 00:18:04.835 "config": null 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "scheduler", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "framework_set_scheduler", 00:18:04.835 "params": { 00:18:04.835 "name": "static" 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "vhost_scsi", 00:18:04.835 "config": [] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "vhost_blk", 00:18:04.835 "config": [] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "ublk", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "ublk_create_target", 00:18:04.835 "params": { 00:18:04.835 "cpumask": "1" 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "ublk_start_disk", 00:18:04.835 "params": { 00:18:04.835 "bdev_name": "malloc0", 00:18:04.835 "ublk_id": 0, 00:18:04.835 "num_queues": 1, 00:18:04.835 "queue_depth": 128 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "nbd", 00:18:04.835 "config": [] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "nvmf", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "nvmf_set_config", 00:18:04.835 "params": { 00:18:04.835 "discovery_filter": "match_any", 00:18:04.835 "admin_cmd_passthru": { 00:18:04.835 "identify_ctrlr": false 00:18:04.835 }, 00:18:04.835 "dhchap_digests": [ 
00:18:04.835 "sha256", 00:18:04.835 "sha384", 00:18:04.835 "sha512" 00:18:04.835 ], 00:18:04.835 "dhchap_dhgroups": [ 00:18:04.835 "null", 00:18:04.835 "ffdhe2048", 00:18:04.835 "ffdhe3072", 00:18:04.835 "ffdhe4096", 00:18:04.835 "ffdhe6144", 00:18:04.835 "ffdhe8192" 00:18:04.835 ] 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "nvmf_set_max_subsystems", 00:18:04.835 "params": { 00:18:04.835 "max_subsystems": 1024 00:18:04.835 } 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "method": "nvmf_set_crdt", 00:18:04.835 "params": { 00:18:04.835 "crdt1": 0, 00:18:04.835 "crdt2": 0, 00:18:04.835 "crdt3": 0 00:18:04.835 } 00:18:04.835 } 00:18:04.835 ] 00:18:04.835 }, 00:18:04.835 { 00:18:04.835 "subsystem": "iscsi", 00:18:04.835 "config": [ 00:18:04.835 { 00:18:04.835 "method": "iscsi_set_options", 00:18:04.835 "params": { 00:18:04.835 "node_base": "iqn.2016-06.io.spdk", 00:18:04.835 "max_sessions": 128, 00:18:04.836 "max_connections_per_session": 2, 00:18:04.836 "max_queue_depth": 64, 00:18:04.836 "default_time2wait": 2, 00:18:04.836 "default_time2retain": 20, 00:18:04.836 "first_burst_length": 8192, 00:18:04.836 "immediate_data": true, 00:18:04.836 "allow_duplicated_isid": false, 00:18:04.836 "error_recovery_level": 0, 00:18:04.836 "nop_timeout": 60, 00:18:04.836 "nop_in_interval": 30, 00:18:04.836 "disable_chap": false, 00:18:04.836 "require_chap": false, 00:18:04.836 "mutual_chap": false, 00:18:04.836 "chap_group": 0, 00:18:04.836 "max_large_datain_per_connection": 64, 00:18:04.836 "max_r2t_per_connection": 4, 00:18:04.836 "pdu_pool_size": 36864, 00:18:04.836 "immediate_data_pool_size": 16384, 00:18:04.836 "data_out_pool_size": 2048 00:18:04.836 } 00:18:04.836 } 00:18:04.836 ] 00:18:04.836 } 00:18:04.836 ] 00:18:04.836 }' 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72797 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72797 ']' 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72797 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72797 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.836 killing process with pid 72797 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72797' 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72797 00:18:04.836 12:28:27 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72797 00:18:06.210 [2024-10-07 12:28:29.364376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:06.210 [2024-10-07 12:28:29.399933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:06.210 [2024-10-07 12:28:29.400101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:06.210 [2024-10-07 12:28:29.413933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:06.210 [2024-10-07 12:28:29.413995] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:18:06.210 [2024-10-07 12:28:29.414009] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:06.210 [2024-10-07 12:28:29.414054] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:06.210 [2024-10-07 12:28:29.414201] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:08.745 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72868 00:18:08.745 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72868 00:18:08.745 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72868 ']' 00:18:08.745 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.745 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.746 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.746 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:08.746 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.746 12:28:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:08.746 12:28:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:08.746 "subsystems": [ 00:18:08.746 { 00:18:08.746 "subsystem": "fsdev", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "fsdev_set_opts", 00:18:08.746 "params": { 00:18:08.746 "fsdev_io_pool_size": 65535, 00:18:08.746 "fsdev_io_cache_size": 256 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "keyring", 00:18:08.746 "config": [] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "iobuf", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "iobuf_set_options", 00:18:08.746 "params": { 00:18:08.746 "small_pool_count": 8192, 00:18:08.746 "large_pool_count": 1024, 00:18:08.746 "small_bufsize": 8192, 00:18:08.746 "large_bufsize": 135168 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "sock", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "sock_set_default_impl", 00:18:08.746 "params": { 00:18:08.746 "impl_name": "posix" 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "sock_impl_set_options", 00:18:08.746 "params": { 00:18:08.746 "impl_name": "ssl", 00:18:08.746 "recv_buf_size": 4096, 00:18:08.746 "send_buf_size": 4096, 00:18:08.746 "enable_recv_pipe": true, 00:18:08.746 "enable_quickack": false, 00:18:08.746 "enable_placement_id": 0, 00:18:08.746 "enable_zerocopy_send_server": true, 00:18:08.746 "enable_zerocopy_send_client": false, 00:18:08.746 "zerocopy_threshold": 0, 00:18:08.746 "tls_version": 0, 00:18:08.746 "enable_ktls": false 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "sock_impl_set_options", 00:18:08.746 "params": { 00:18:08.746 "impl_name": "posix", 00:18:08.746 "recv_buf_size": 2097152, 00:18:08.746 "send_buf_size": 2097152, 00:18:08.746 "enable_recv_pipe": true, 00:18:08.746 "enable_quickack": false, 00:18:08.746 "enable_placement_id": 0, 00:18:08.746 "enable_zerocopy_send_server": true, 00:18:08.746 "enable_zerocopy_send_client": false, 00:18:08.746 "zerocopy_threshold": 0, 00:18:08.746 
"tls_version": 0, 00:18:08.746 "enable_ktls": false 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "vmd", 00:18:08.746 "config": [] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "accel", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "accel_set_options", 00:18:08.746 "params": { 00:18:08.746 "small_cache_size": 128, 00:18:08.746 "large_cache_size": 16, 00:18:08.746 "task_count": 2048, 00:18:08.746 "sequence_count": 2048, 00:18:08.746 "buf_count": 2048 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "bdev", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "bdev_set_options", 00:18:08.746 "params": { 00:18:08.746 "bdev_io_pool_size": 65535, 00:18:08.746 "bdev_io_cache_size": 256, 00:18:08.746 "bdev_auto_examine": true, 00:18:08.746 "iobuf_small_cache_size": 128, 00:18:08.746 "iobuf_large_cache_size": 16 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "bdev_raid_set_options", 00:18:08.746 "params": { 00:18:08.746 "process_window_size_kb": 1024, 00:18:08.746 "process_max_bandwidth_mb_sec": 0 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "bdev_iscsi_set_options", 00:18:08.746 "params": { 00:18:08.746 "timeout_sec": 30 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "bdev_nvme_set_options", 00:18:08.746 "params": { 00:18:08.746 "action_on_timeout": "none", 00:18:08.746 "timeout_us": 0, 00:18:08.746 "timeout_admin_us": 0, 00:18:08.746 "keep_alive_timeout_ms": 10000, 00:18:08.746 "arbitration_burst": 0, 00:18:08.746 "low_priority_weight": 0, 00:18:08.746 "medium_priority_weight": 0, 00:18:08.746 "high_priority_weight": 0, 00:18:08.746 "nvme_adminq_poll_period_us": 10000, 00:18:08.746 "nvme_ioq_poll_period_us": 0, 00:18:08.746 "io_queue_requests": 0, 00:18:08.746 "delay_cmd_submit": true, 00:18:08.746 "transport_retry_count": 4, 00:18:08.746 "bdev_retry_count": 3, 00:18:08.746 "transport_ack_timeout": 0, 00:18:08.746 "ctrlr_loss_timeout_sec": 0, 00:18:08.746 "reconnect_delay_sec": 0, 00:18:08.746 "fast_io_fail_timeout_sec": 0, 00:18:08.746 "disable_auto_failback": false, 00:18:08.746 "generate_uuids": false, 00:18:08.746 "transport_tos": 0, 00:18:08.746 "nvme_error_stat": false, 00:18:08.746 "rdma_srq_size": 0, 00:18:08.746 "io_path_stat": false, 00:18:08.746 "allow_accel_sequence": false, 00:18:08.746 "rdma_max_cq_size": 0, 00:18:08.746 "rdma_cm_event_timeout_ms": 0, 00:18:08.746 "dhchap_digests": [ 00:18:08.746 "sha256", 00:18:08.746 "sha384", 00:18:08.746 "sha512" 00:18:08.746 ], 00:18:08.746 "dhchap_dhgroups": [ 00:18:08.746 "null", 00:18:08.746 "ffdhe2048", 00:18:08.746 "ffdhe3072", 00:18:08.746 "ffdhe4096", 00:18:08.746 "ffdhe6144", 00:18:08.746 "ffdhe8192" 00:18:08.746 ] 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "bdev_nvme_set_hotplug", 00:18:08.746 "params": { 00:18:08.746 "period_us": 100000, 00:18:08.746 "enable": false 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "bdev_malloc_create", 00:18:08.746 "params": { 00:18:08.746 "name": "malloc0", 00:18:08.746 "num_blocks": 8192, 00:18:08.746 "block_size": 4096, 00:18:08.746 "physical_block_size": 4096, 00:18:08.746 "uuid": "fb0ad217-1062-453c-938f-2ec2cb54d092", 00:18:08.746 "optimal_io_boundary": 0, 00:18:08.746 "md_size": 0, 00:18:08.746 "dif_type": 0, 00:18:08.746 "dif_is_head_of_md": false, 00:18:08.746 "dif_pi_format": 0 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 
00:18:08.746 "method": "bdev_wait_for_examine" 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "scsi", 00:18:08.746 "config": null 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "scheduler", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "framework_set_scheduler", 00:18:08.746 "params": { 00:18:08.746 "name": "static" 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "vhost_scsi", 00:18:08.746 "config": [] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "vhost_blk", 00:18:08.746 "config": [] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "ublk", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "ublk_create_target", 00:18:08.746 "params": { 00:18:08.746 "cpumask": "1" 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "ublk_start_disk", 00:18:08.746 "params": { 00:18:08.746 "bdev_name": "malloc0", 00:18:08.746 "ublk_id": 0, 00:18:08.746 "num_queues": 1, 00:18:08.746 "queue_depth": 128 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "nbd", 00:18:08.746 "config": [] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "nvmf", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "nvmf_set_config", 00:18:08.746 "params": { 00:18:08.746 "discovery_filter": "match_any", 00:18:08.746 "admin_cmd_passthru": { 00:18:08.746 "identify_ctrlr": false 00:18:08.746 }, 00:18:08.746 "dhchap_digests": [ 00:18:08.746 "sha256", 00:18:08.746 "sha384", 00:18:08.746 "sha512" 00:18:08.746 ], 00:18:08.746 "dhchap_dhgroups": [ 00:18:08.746 "null", 00:18:08.746 "ffdhe2048", 00:18:08.746 "ffdhe3072", 00:18:08.746 "ffdhe4096", 00:18:08.746 "ffdhe6144", 00:18:08.746 "ffdhe8192" 00:18:08.746 ] 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "nvmf_set_max_subsystems", 00:18:08.746 "params": { 00:18:08.746 "max_subsystems": 1024 00:18:08.746 } 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "method": "nvmf_set_crdt", 00:18:08.746 "params": { 00:18:08.746 "crdt1": 0, 00:18:08.746 "crdt2": 0, 00:18:08.746 "crdt3": 0 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "subsystem": "iscsi", 00:18:08.746 "config": [ 00:18:08.746 { 00:18:08.746 "method": "iscsi_set_options", 00:18:08.746 "params": { 00:18:08.746 "node_base": "iqn.2016-06.io.spdk", 00:18:08.746 "max_sessions": 128, 00:18:08.746 "max_connections_per_session": 2, 00:18:08.746 "max_queue_depth": 64, 00:18:08.746 "default_time2wait": 2, 00:18:08.746 "default_time2retain": 20, 00:18:08.746 "first_burst_length": 8192, 00:18:08.746 "immediate_data": true, 00:18:08.746 "allow_duplicated_isid": false, 00:18:08.746 "error_recovery_level": 0, 00:18:08.746 "nop_timeout": 60, 00:18:08.746 "nop_in_interval": 30, 00:18:08.746 "disable_chap": false, 00:18:08.746 "require_chap": false, 00:18:08.746 "mutual_chap": false, 00:18:08.746 "chap_group": 0, 00:18:08.746 "max_large_datain_per_connection": 64, 00:18:08.746 "max_r2t_per_connection": 4, 00:18:08.746 "pdu_pool_size": 36864, 00:18:08.746 "immediate_data_pool_size": 16384, 00:18:08.746 "data_out_pool_size": 2048 00:18:08.746 } 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }' 00:18:08.746 [2024-10-07 12:28:31.556618] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:18:08.746 [2024-10-07 12:28:31.557282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72868 ] 00:18:08.746 [2024-10-07 12:28:31.728091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.746 [2024-10-07 12:28:31.961393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.681 [2024-10-07 12:28:32.950918] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:09.682 [2024-10-07 12:28:32.951959] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:09.682 [2024-10-07 12:28:32.957043] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:09.682 [2024-10-07 12:28:32.957128] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:09.682 [2024-10-07 12:28:32.957138] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:09.682 [2024-10-07 12:28:32.957146] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:09.682 [2024-10-07 12:28:32.968007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:09.682 [2024-10-07 12:28:32.968032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:09.940 [2024-10-07 12:28:32.974933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:09.940 [2024-10-07 12:28:32.975036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:09.940 [2024-10-07 12:28:32.991933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72868 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72868 ']' 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72868 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72868 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.941 killing process with pid 72868 00:18:09.941 
12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72868' 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72868 00:18:09.941 12:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72868 00:18:11.846 [2024-10-07 12:28:34.635071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:11.846 [2024-10-07 12:28:34.674978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:11.846 [2024-10-07 12:28:34.675109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:11.846 [2024-10-07 12:28:34.682935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:11.846 [2024-10-07 12:28:34.682996] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:11.846 [2024-10-07 12:28:34.683005] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:11.846 [2024-10-07 12:28:34.683041] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:11.846 [2024-10-07 12:28:34.683183] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:13.753 12:28:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:13.753 00:18:13.753 real 0m10.483s 00:18:13.753 user 0m7.927s 00:18:13.753 sys 0m3.250s 00:18:13.753 12:28:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:13.753 12:28:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:13.753 ************************************ 00:18:13.753 END TEST test_save_ublk_config 00:18:13.753 ************************************ 00:18:13.753 12:28:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72955 00:18:13.753 12:28:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:13.753 12:28:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.753 12:28:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72955 00:18:13.753 12:28:36 ublk -- common/autotest_common.sh@831 -- # '[' -z 72955 ']' 00:18:13.753 12:28:36 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.753 12:28:36 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.753 12:28:36 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.753 12:28:36 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.753 12:28:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.753 [2024-10-07 12:28:36.796816] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:18:13.753 [2024-10-07 12:28:36.796951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72955 ] 00:18:13.753 [2024-10-07 12:28:36.969103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.012 [2024-10-07 12:28:37.186460] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.012 [2024-10-07 12:28:37.186496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.950 12:28:38 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.950 12:28:38 ublk -- common/autotest_common.sh@864 -- # return 0 00:18:14.950 12:28:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:14.950 12:28:38 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:14.950 12:28:38 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.950 12:28:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.950 ************************************ 00:18:14.950 START TEST test_create_ublk 00:18:14.950 ************************************ 00:18:14.950 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:18:14.950 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:14.950 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.950 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.950 [2024-10-07 12:28:38.098928] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:14.950 [2024-10-07 12:28:38.101239] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:14.950 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.950 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:14.950 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:14.950 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.950 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.210 [2024-10-07 12:28:38.389101] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:15.210 [2024-10-07 12:28:38.389552] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:15.210 [2024-10-07 12:28:38.389573] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:15.210 [2024-10-07 12:28:38.389582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:15.210 [2024-10-07 12:28:38.397232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:15.210 [2024-10-07 12:28:38.397256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:15.210 
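The single-disk setup being traced here can be reproduced by hand with the same RPCs; a sketch using the rpc.py path that appears in this log:

  ./scripts/rpc.py ublk_create_target                      # one-time: start the ublk target framework
  ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB malloc bdev, 4096-byte blocks -> Malloc0
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it as /dev/ublkb0 with 4 queues, depth 512
  ./scripts/rpc.py ublk_get_disks                          # confirm the device was registered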
[2024-10-07 12:28:38.404934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:15.210 [2024-10-07 12:28:38.405490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:15.210 [2024-10-07 12:28:38.428957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.210 12:28:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:15.210 { 00:18:15.210 "ublk_device": "/dev/ublkb0", 00:18:15.210 "id": 0, 00:18:15.210 "queue_depth": 512, 00:18:15.210 "num_queues": 4, 00:18:15.210 "bdev_name": "Malloc0" 00:18:15.210 } 00:18:15.210 ]' 00:18:15.210 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:15.468 12:28:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
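The fio command assembled above, reflowed here for readability, writes a 0xcc pattern over the first 128 MiB of the ublk device with direct I/O; as the fio banner below notes, the verify read phase never runs because the time-based write phase consumes the entire 10-second runtime:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0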
00:18:15.468 12:28:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:15.727 fio: verification read phase will never start because write phase uses all of runtime 00:18:15.727 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:15.727 fio-3.35 00:18:15.727 Starting 1 process 00:18:25.710 00:18:25.710 fio_test: (groupid=0, jobs=1): err= 0: pid=73011: Mon Oct 7 12:28:48 2024 00:18:25.710 write: IOPS=16.9k, BW=65.9MiB/s (69.1MB/s)(659MiB/10001msec); 0 zone resets 00:18:25.710 clat (usec): min=36, max=4044, avg=58.52, stdev=98.45 00:18:25.710 lat (usec): min=36, max=4044, avg=58.93, stdev=98.46 00:18:25.710 clat percentiles (usec): 00:18:25.710 | 1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:18:25.710 | 30.00th=[ 53], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 55], 00:18:25.710 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 60], 95.00th=[ 63], 00:18:25.710 | 99.00th=[ 82], 99.50th=[ 147], 99.90th=[ 1909], 99.95th=[ 2868], 00:18:25.710 | 99.99th=[ 3621] 00:18:25.710 bw ( KiB/s): min=67257, max=77232, per=100.00%, avg=68338.37, stdev=2188.41, samples=19 00:18:25.710 iops : min=16814, max=19308, avg=17084.58, stdev=547.11, samples=19 00:18:25.710 lat (usec) : 50=6.76%, 100=92.35%, 250=0.69%, 500=0.02%, 750=0.01% 00:18:25.710 lat (usec) : 1000=0.01% 00:18:25.710 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:18:25.710 cpu : usr=3.13%, sys=9.40%, ctx=168839, majf=0, minf=796 00:18:25.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.710 issued rwts: total=0,168829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.710 00:18:25.710 Run status group 0 (all jobs): 00:18:25.710 WRITE: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=659MiB (692MB), run=10001-10001msec 00:18:25.710 00:18:25.710 Disk stats (read/write): 00:18:25.710 ublkb0: ios=0/168240, merge=0/0, ticks=0/8714, in_queue=8715, util=99.16% 00:18:25.710 12:28:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:25.710 12:28:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.710 12:28:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.710 [2024-10-07 12:28:48.956269] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:25.969 [2024-10-07 12:28:48.999010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:25.969 [2024-10-07 12:28:48.999946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:25.969 [2024-10-07 12:28:49.007139] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:25.969 [2024-10-07 12:28:49.008019] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:25.969 [2024-10-07 12:28:49.008115] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.969 12:28:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.969 [2024-10-07 12:28:49.031141] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:25.969 request: 00:18:25.969 { 00:18:25.969 "ublk_id": 0, 00:18:25.969 "method": "ublk_stop_disk", 00:18:25.969 "req_id": 1 00:18:25.969 } 00:18:25.969 Got JSON-RPC error response 00:18:25.969 response: 00:18:25.969 { 00:18:25.969 "code": -19, 00:18:25.969 "message": "No such device" 00:18:25.969 } 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.969 12:28:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.969 [2024-10-07 12:28:49.055068] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:25.969 [2024-10-07 12:28:49.062180] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:25.969 [2024-10-07 12:28:49.062226] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.969 12:28:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.969 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.538 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.538 12:28:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:26.538 12:28:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:26.538 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.538 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.538 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.538 12:28:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:26.538 12:28:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:26.797 12:28:49 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:26.797 12:28:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:26.797 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.797 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.797 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.797 12:28:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:26.797 12:28:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:26.797 12:28:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:26.797 00:18:26.797 real 0m11.805s 00:18:26.797 user 0m0.729s 00:18:26.797 sys 0m1.074s 00:18:26.797 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.797 12:28:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.797 ************************************ 00:18:26.797 END TEST test_create_ublk 00:18:26.797 ************************************ 00:18:26.797 12:28:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:26.797 12:28:49 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:26.797 12:28:49 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.797 12:28:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.797 ************************************ 00:18:26.797 START TEST test_create_multi_ublk 00:18:26.797 ************************************ 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.797 [2024-10-07 12:28:49.975925] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:26.797 [2024-10-07 12:28:49.977933] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.797 12:28:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.056 [2024-10-07 12:28:50.256063] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
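The multi-disk variant repeats the single-disk setup inside a loop over $(seq 0 $MAX_DEV_ID); a condensed sketch of the four iterations traced below (MAX_DEV_ID is 3 here, matching the four devices that end up registered):

  for i in $(seq 0 3); do
    ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096     # one named bdev per iteration
    ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
  done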
00:18:27.056 [2024-10-07 12:28:50.256510] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:27.056 [2024-10-07 12:28:50.256528] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:27.056 [2024-10-07 12:28:50.256541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:27.056 [2024-10-07 12:28:50.263962] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:27.056 [2024-10-07 12:28:50.263994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:27.056 [2024-10-07 12:28:50.271932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:27.056 [2024-10-07 12:28:50.272492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:27.056 [2024-10-07 12:28:50.283349] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.056 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:27.057 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.057 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.316 [2024-10-07 12:28:50.577062] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:27.316 [2024-10-07 12:28:50.577507] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:27.316 [2024-10-07 12:28:50.577528] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:27.316 [2024-10-07 12:28:50.577548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:27.316 [2024-10-07 12:28:50.585237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:27.316 [2024-10-07 12:28:50.585262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:27.316 [2024-10-07 12:28:50.592947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:27.316 [2024-10-07 12:28:50.593506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:27.316 [2024-10-07 12:28:50.599981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:27.316 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.576 12:28:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:27.576 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.576 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:27.857 [2024-10-07 12:28:50.880076] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:27.857 [2024-10-07 12:28:50.880510] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:27.857 [2024-10-07 12:28:50.880531] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:27.857 [2024-10-07 12:28:50.880542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:27.857 [2024-10-07 12:28:50.887951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:27.857 [2024-10-07 12:28:50.887982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:27.857 [2024-10-07 12:28:50.895937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:27.857 [2024-10-07 12:28:50.896523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:27.857 [2024-10-07 12:28:50.904969] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:27.857 12:28:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:27.858 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.858 12:28:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.126 [2024-10-07 12:28:51.212064] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:28.126 [2024-10-07 12:28:51.212493] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:28.126 [2024-10-07 12:28:51.212512] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:28.126 [2024-10-07 12:28:51.212521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:28.126 [2024-10-07 
12:28:51.219952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:28.126 [2024-10-07 12:28:51.219979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:28.126 [2024-10-07 12:28:51.227948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:28.126 [2024-10-07 12:28:51.228497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:28.126 [2024-10-07 12:28:51.234179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:28.126 { 00:18:28.126 "ublk_device": "/dev/ublkb0", 00:18:28.126 "id": 0, 00:18:28.126 "queue_depth": 512, 00:18:28.126 "num_queues": 4, 00:18:28.126 "bdev_name": "Malloc0" 00:18:28.126 }, 00:18:28.126 { 00:18:28.126 "ublk_device": "/dev/ublkb1", 00:18:28.126 "id": 1, 00:18:28.126 "queue_depth": 512, 00:18:28.126 "num_queues": 4, 00:18:28.126 "bdev_name": "Malloc1" 00:18:28.126 }, 00:18:28.126 { 00:18:28.126 "ublk_device": "/dev/ublkb2", 00:18:28.126 "id": 2, 00:18:28.126 "queue_depth": 512, 00:18:28.126 "num_queues": 4, 00:18:28.126 "bdev_name": "Malloc2" 00:18:28.126 }, 00:18:28.126 { 00:18:28.126 "ublk_device": "/dev/ublkb3", 00:18:28.126 "id": 3, 00:18:28.126 "queue_depth": 512, 00:18:28.126 "num_queues": 4, 00:18:28.126 "bdev_name": "Malloc3" 00:18:28.126 } 00:18:28.126 ]' 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:28.126 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
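Each per-disk assertion indexes the ublk_get_disks array with jq; a condensed sketch of the checks being run here for disk 1:

  disks=$(./scripts/rpc.py ublk_get_disks)
  [[ $(jq -r '.[1].ublk_device' <<< "$disks") == /dev/ublkb1 ]]
  [[ $(jq -r '.[1].queue_depth' <<< "$disks") == 512 ]]
  [[ $(jq -r '.[1].num_queues'  <<< "$disks") == 4 ]]
  [[ $(jq -r '.[1].bdev_name'   <<< "$disks") == Malloc1 ]]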
00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:28.386 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:28.645 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:28.645 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.645 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:28.645 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:28.646 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.905 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:28.905 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:28.905 12:28:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.905 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:28.905 [2024-10-07 12:28:52.165064] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:29.165 [2024-10-07 12:28:52.204999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:29.165 [2024-10-07 12:28:52.205868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:29.165 [2024-10-07 12:28:52.213018] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:29.165 [2024-10-07 12:28:52.213306] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:29.165 [2024-10-07 12:28:52.213320] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:29.165 [2024-10-07 12:28:52.229006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:29.165 [2024-10-07 12:28:52.277373] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:29.165 [2024-10-07 12:28:52.278399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:29.165 [2024-10-07 12:28:52.287947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:29.165 [2024-10-07 12:28:52.288219] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:29.165 [2024-10-07 12:28:52.288233] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:29.165 [2024-10-07 12:28:52.304029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:29.165 [2024-10-07 12:28:52.342366] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:29.165 [2024-10-07 12:28:52.343333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:29.165 [2024-10-07 12:28:52.349953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:29.165 [2024-10-07 12:28:52.350206] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:29.165 [2024-10-07 12:28:52.350225] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
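Teardown mirrors setup in reverse: each disk is stopped (UBLK_CMD_STOP_DEV followed by UBLK_CMD_DEL_DEV in the driver trace), and only then is the target destroyed. A sketch of the sequence, using the widened client timeout that appears in the rpc.py invocation further below:

  for i in 0 1 2 3; do
    ./scripts/rpc.py ublk_stop_disk "$i"       # STOP_DEV + DEL_DEV per device
  done
  ./scripts/rpc.py -t 120 ublk_destroy_target  # -t 120 allows extra time for shutdown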
00:18:29.165 [2024-10-07 12:28:52.366022] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:29.165 [2024-10-07 12:28:52.399370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:29.165 [2024-10-07 12:28:52.400285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:29.165 [2024-10-07 12:28:52.412951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:29.165 [2024-10-07 12:28:52.413207] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:29.165 [2024-10-07 12:28:52.413220] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.165 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:29.424 [2024-10-07 12:28:52.614037] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:29.425 [2024-10-07 12:28:52.617146] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:29.425 [2024-10-07 12:28:52.617185] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:29.425 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:29.425 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:29.425 12:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:29.425 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.425 12:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.363 12:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.363 12:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.363 12:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:30.363 12:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.363 12:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.622 12:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.622 12:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.622 12:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:30.622 12:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.622 12:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.881 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.881 12:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:30.881 12:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:30.881 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.881 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:31.449 00:18:31.449 real 0m4.597s 00:18:31.449 user 0m1.043s 00:18:31.449 sys 0m0.239s 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.449 ************************************ 00:18:31.449 END TEST test_create_multi_ublk 00:18:31.449 12:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.449 ************************************ 00:18:31.449 12:28:54 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:31.449 12:28:54 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:31.449 12:28:54 ublk -- ublk/ublk.sh@130 -- # killprocess 72955 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@950 -- # '[' -z 72955 ']' 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@954 -- # kill -0 72955 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@955 -- # uname 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72955 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.449 killing process with pid 72955 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72955' 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@969 -- # kill 72955 00:18:31.449 12:28:54 ublk -- common/autotest_common.sh@974 -- # wait 72955 00:18:32.828 [2024-10-07 12:28:55.808394] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:32.828 [2024-10-07 12:28:55.808453] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:34.208 00:18:34.208 real 0m31.341s 00:18:34.208 user 0m44.583s 00:18:34.208 sys 0m10.468s 00:18:34.208 12:28:57 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.208 12:28:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:34.208 ************************************ 00:18:34.208 END TEST ublk 00:18:34.208 ************************************ 00:18:34.208 12:28:57 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:34.208 12:28:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:18:34.208 12:28:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.208 12:28:57 -- common/autotest_common.sh@10 -- # set +x 00:18:34.208 ************************************ 00:18:34.208 START TEST ublk_recovery 00:18:34.208 ************************************ 00:18:34.208 12:28:57 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:34.208 * Looking for test storage... 00:18:34.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:34.208 12:28:57 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:34.208 12:28:57 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:18:34.208 12:28:57 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:34.208 12:28:57 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.208 12:28:57 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.468 12:28:57 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:34.468 12:28:57 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.468 12:28:57 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:34.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.468 --rc genhtml_branch_coverage=1 00:18:34.468 --rc genhtml_function_coverage=1 00:18:34.468 --rc genhtml_legend=1 00:18:34.468 --rc geninfo_all_blocks=1 00:18:34.468 --rc geninfo_unexecuted_blocks=1 00:18:34.468 00:18:34.468 ' 00:18:34.468 12:28:57 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:34.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.468 --rc genhtml_branch_coverage=1 00:18:34.468 --rc genhtml_function_coverage=1 00:18:34.468 --rc genhtml_legend=1 00:18:34.468 --rc geninfo_all_blocks=1 00:18:34.468 --rc geninfo_unexecuted_blocks=1 00:18:34.468 00:18:34.468 ' 00:18:34.468 12:28:57 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:34.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.468 --rc genhtml_branch_coverage=1 00:18:34.468 --rc genhtml_function_coverage=1 00:18:34.468 --rc genhtml_legend=1 00:18:34.468 --rc geninfo_all_blocks=1 00:18:34.468 --rc geninfo_unexecuted_blocks=1 00:18:34.468 00:18:34.468 ' 00:18:34.468 12:28:57 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:34.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.468 --rc genhtml_branch_coverage=1 00:18:34.468 --rc genhtml_function_coverage=1 00:18:34.468 --rc genhtml_legend=1 00:18:34.468 --rc geninfo_all_blocks=1 00:18:34.469 --rc geninfo_unexecuted_blocks=1 00:18:34.469 00:18:34.469 ' 00:18:34.469 12:28:57 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:34.469 12:28:57 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:34.469 12:28:57 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:34.469 12:28:57 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73398 00:18:34.469 12:28:57 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:34.469 12:28:57 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.469 12:28:57 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73398 00:18:34.469 12:28:57 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73398 ']' 00:18:34.469 12:28:57 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.469 12:28:57 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.469 12:28:57 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.469 12:28:57 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.469 12:28:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.469 [2024-10-07 12:28:57.643508] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:18:34.469 [2024-10-07 12:28:57.643646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73398 ] 00:18:34.728 [2024-10-07 12:28:57.829099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:34.987 [2024-10-07 12:28:58.039369] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.987 [2024-10-07 12:28:58.039405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.929 12:28:58 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.929 12:28:58 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:18:35.929 12:28:58 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:35.930 12:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.930 12:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.930 [2024-10-07 12:28:58.965922] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:35.930 [2024-10-07 12:28:58.967933] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:35.930 12:28:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.930 12:28:58 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:35.930 12:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.930 12:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.930 malloc0 00:18:35.930 12:28:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.930 12:28:59 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:35.930 12:28:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.930 12:28:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.930 [2024-10-07 12:28:59.131086] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:35.930 [2024-10-07 12:28:59.131203] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:35.930 [2024-10-07 12:28:59.131219] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:35.930 [2024-10-07 12:28:59.131228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:35.930 [2024-10-07 12:28:59.139079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:35.930 [2024-10-07 12:28:59.139105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:35.930 [2024-10-07 12:28:59.146940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:35.930 [2024-10-07 12:28:59.147096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:35.930 [2024-10-07 12:28:59.161948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:35.930 1 00:18:35.930 12:28:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.930 12:28:59 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:37.308 12:29:00 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73433 00:18:37.308 12:29:00 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:37.308 12:29:00 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:37.308 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:37.308 fio-3.35 00:18:37.308 Starting 1 process 00:18:42.610 12:29:05 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73398 00:18:42.610 12:29:05 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:47.885 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73398 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:47.885 12:29:10 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73546 00:18:47.885 12:29:10 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:47.885 12:29:10 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.885 12:29:10 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73546 00:18:47.885 12:29:10 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73546 ']' 00:18:47.885 12:29:10 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.885 12:29:10 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.885 12:29:10 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.885 12:29:10 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.885 12:29:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.885 [2024-10-07 12:29:10.296756] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:18:47.885 [2024-10-07 12:29:10.296894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73546 ] 00:18:47.885 [2024-10-07 12:29:10.469390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:47.885 [2024-10-07 12:29:10.712386] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.885 [2024-10-07 12:29:10.712416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:18:48.454 12:29:11 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.454 [2024-10-07 12:29:11.685922] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:48.454 [2024-10-07 12:29:11.688282] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.454 12:29:11 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.454 12:29:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.712 malloc0 00:18:48.712 12:29:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.713 12:29:11 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:48.713 12:29:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.713 12:29:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.713 [2024-10-07 12:29:11.848077] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:48.713 [2024-10-07 12:29:11.848119] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:48.713 [2024-10-07 12:29:11.848131] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:48.713 [2024-10-07 12:29:11.855960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:48.713 [2024-10-07 12:29:11.855984] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:48.713 1 00:18:48.713 12:29:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.713 12:29:11 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73433 00:18:49.650 [2024-10-07 12:29:12.854936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:49.650 [2024-10-07 12:29:12.862935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:49.650 [2024-10-07 12:29:12.862955] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:50.588 [2024-10-07 12:29:13.863974] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:50.588 [2024-10-07 12:29:13.871968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:50.588 [2024-10-07 12:29:13.871996] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:51.966 [2024-10-07 12:29:14.870415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:51.966 [2024-10-07 12:29:14.873939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:51.966 [2024-10-07 12:29:14.873953] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:51.966 [2024-10-07 12:29:14.873967] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:51.966 [2024-10-07 12:29:14.874076] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:13.905 [2024-10-07 12:29:35.572935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:13.905 [2024-10-07 12:29:35.579567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:13.905 [2024-10-07 12:29:35.587176] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:13.905 [2024-10-07 12:29:35.587198] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:40.483 00:19:40.483 fio_test: (groupid=0, jobs=1): err= 0: pid=73437: Mon Oct 7 12:30:00 2024 00:19:40.483 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(2734MiB/60002msec) 00:19:40.483 slat (nsec): min=1978, max=743195, avg=7812.64, stdev=2932.27 00:19:40.483 clat (usec): min=1277, max=30416k, avg=5828.50, stdev=310551.11 00:19:40.483 lat (usec): min=1291, max=30416k, avg=5836.31, stdev=310551.11 00:19:40.483 clat percentiles (usec): 00:19:40.483 | 1.00th=[ 1926], 5.00th=[ 2147], 10.00th=[ 2212], 00:19:40.483 | 20.00th=[ 2278], 30.00th=[ 2311], 40.00th=[ 2343], 00:19:40.483 | 50.00th=[ 2442], 60.00th=[ 2606], 70.00th=[ 2737], 00:19:40.483 | 80.00th=[ 2835], 90.00th=[ 3195], 95.00th=[ 4080], 00:19:40.483 | 99.00th=[ 5538], 99.50th=[ 6063], 99.90th=[ 8094], 00:19:40.483 | 99.95th=[ 8586], 99.99th=[17112761] 00:19:40.483 bw ( KiB/s): min=38072, max=106240, per=100.00%, avg=93575.56, stdev=13289.35, samples=59 00:19:40.483 iops : min= 9518, max=26560, avg=23393.88, stdev=3322.34, samples=59 00:19:40.483 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(2730MiB/60002msec); 0 zone resets 00:19:40.483 slat (usec): min=2, max=533, avg= 7.95, stdev= 2.74 00:19:40.483 clat (usec): min=1061, max=30416k, avg=5134.24, stdev=269779.26 00:19:40.483 lat (usec): min=1075, max=30416k, avg=5142.20, stdev=269779.26 00:19:40.483 clat percentiles (usec): 00:19:40.483 | 1.00th=[ 1926], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2343], 00:19:40.483 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2671], 00:19:40.483 | 70.00th=[ 2868], 80.00th=[ 2966], 90.00th=[ 3228], 95.00th=[ 4113], 00:19:40.483 | 99.00th=[ 5538], 99.50th=[ 6128], 99.90th=[ 8225], 99.95th=[ 8717], 00:19:40.483 | 99.99th=[13566] 00:19:40.483 bw ( KiB/s): min=38824, max=106136, per=100.00%, avg=93464.76, stdev=13116.21, samples=59 00:19:40.483 iops : min= 9706, max=26534, avg=23366.17, stdev=3279.05, samples=59 00:19:40.483 lat (msec) : 2=1.81%, 4=92.70%, 10=5.47%, 20=0.02%, >=2000=0.01% 00:19:40.483 cpu : usr=6.59%, sys=18.31%, ctx=60083, majf=0, minf=13 00:19:40.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:40.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:40.483 issued rwts: total=699991,698881,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:19:40.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:40.483 00:19:40.483 Run status group 0 (all jobs): 00:19:40.483 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=2734MiB (2867MB), run=60002-60002msec 00:19:40.483 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=2730MiB (2863MB), run=60002-60002msec 00:19:40.483 00:19:40.483 Disk stats (read/write): 00:19:40.483 ublkb1: ios=697563/696463, merge=0/0, ticks=4004506/3440316, in_queue=7444823, util=99.96% 00:19:40.483 12:30:00 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.483 [2024-10-07 12:30:00.455151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:40.483 [2024-10-07 12:30:00.493029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:40.483 [2024-10-07 12:30:00.493195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:40.483 [2024-10-07 12:30:00.498942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:40.483 [2024-10-07 12:30:00.499122] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:40.483 [2024-10-07 12:30:00.499136] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.483 12:30:00 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.483 [2024-10-07 12:30:00.514043] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:40.483 [2024-10-07 12:30:00.517209] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:40.483 [2024-10-07 12:30:00.517248] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.483 12:30:00 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:40.483 12:30:00 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:40.483 12:30:00 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73546 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73546 ']' 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73546 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73546 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73546' 00:19:40.483 killing process with pid 73546 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73546 00:19:40.483 12:30:00 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73546 00:19:40.483 [2024-10-07 
12:30:02.175856] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:40.483 [2024-10-07 12:30:02.175918] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:40.483 00:19:40.483 real 1m6.460s 00:19:40.483 user 1m51.192s 00:19:40.483 sys 0m25.517s 00:19:40.483 ************************************ 00:19:40.483 END TEST ublk_recovery 00:19:40.483 ************************************ 00:19:40.483 12:30:03 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.483 12:30:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.743 12:30:03 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:40.743 12:30:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.743 12:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:40.743 12:30:03 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:19:40.743 12:30:03 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:40.743 12:30:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:40.743 12:30:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.743 12:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:40.743 ************************************ 00:19:40.743 START TEST ftl 00:19:40.743 ************************************ 00:19:40.743 12:30:03 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:40.743 * Looking for test storage... 00:19:40.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.743 12:30:03 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:40.743 12:30:03 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:19:40.743 12:30:03 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:41.003 12:30:04 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.003 12:30:04 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.003 12:30:04 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.003 12:30:04 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.003 12:30:04 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.003 12:30:04 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.003 12:30:04 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:41.003 12:30:04 ftl -- scripts/common.sh@345 -- # : 1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.003 12:30:04 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.003 12:30:04 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@353 -- # local d=1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.003 12:30:04 ftl -- scripts/common.sh@355 -- # echo 1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.003 12:30:04 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@353 -- # local d=2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.003 12:30:04 ftl -- scripts/common.sh@355 -- # echo 2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.003 12:30:04 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.003 12:30:04 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.003 12:30:04 ftl -- scripts/common.sh@368 -- # return 0 00:19:41.003 12:30:04 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.003 12:30:04 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.003 --rc genhtml_branch_coverage=1 00:19:41.003 --rc genhtml_function_coverage=1 00:19:41.003 --rc genhtml_legend=1 00:19:41.003 --rc geninfo_all_blocks=1 00:19:41.003 --rc geninfo_unexecuted_blocks=1 00:19:41.003 00:19:41.003 ' 00:19:41.003 12:30:04 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.003 --rc genhtml_branch_coverage=1 00:19:41.003 --rc genhtml_function_coverage=1 00:19:41.003 --rc genhtml_legend=1 00:19:41.003 --rc geninfo_all_blocks=1 00:19:41.003 --rc geninfo_unexecuted_blocks=1 00:19:41.003 00:19:41.003 ' 00:19:41.003 12:30:04 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.003 --rc genhtml_branch_coverage=1 00:19:41.003 --rc genhtml_function_coverage=1 00:19:41.003 --rc genhtml_legend=1 00:19:41.003 --rc geninfo_all_blocks=1 00:19:41.003 --rc geninfo_unexecuted_blocks=1 00:19:41.003 00:19:41.003 ' 00:19:41.003 12:30:04 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:41.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.003 --rc genhtml_branch_coverage=1 00:19:41.003 --rc genhtml_function_coverage=1 00:19:41.003 --rc genhtml_legend=1 00:19:41.003 --rc geninfo_all_blocks=1 00:19:41.003 --rc geninfo_unexecuted_blocks=1 00:19:41.003 00:19:41.003 ' 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:41.003 12:30:04 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:41.003 12:30:04 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.003 12:30:04 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.003 12:30:04 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:41.003 12:30:04 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:41.003 12:30:04 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.003 12:30:04 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:41.003 12:30:04 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:41.003 12:30:04 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.003 12:30:04 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.003 12:30:04 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:41.003 12:30:04 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:41.003 12:30:04 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:41.003 12:30:04 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:41.003 12:30:04 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:41.003 12:30:04 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:41.003 12:30:04 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.003 12:30:04 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.003 12:30:04 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:41.003 12:30:04 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:41.003 12:30:04 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:41.003 12:30:04 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:41.003 12:30:04 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:41.003 12:30:04 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:41.003 12:30:04 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:41.003 12:30:04 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:41.003 12:30:04 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:41.003 12:30:04 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:41.003 12:30:04 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:41.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.830 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.830 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.830 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.830 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.830 12:30:04 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:41.830 12:30:04 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74358 00:19:41.830 12:30:04 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74358 00:19:41.830 12:30:04 ftl -- common/autotest_common.sh@831 -- # '[' -z 74358 ']' 00:19:41.830 12:30:04 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.830 12:30:04 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.830 12:30:04 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.830 12:30:04 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.830 12:30:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:41.830 [2024-10-07 12:30:05.022315] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:41.830 [2024-10-07 12:30:05.022438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74358 ] 00:19:42.090 [2024-10-07 12:30:05.194493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.376 [2024-10-07 12:30:05.401816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.635 12:30:05 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.635 12:30:05 ftl -- common/autotest_common.sh@864 -- # return 0 00:19:42.635 12:30:05 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:42.893 12:30:06 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:43.828 12:30:07 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:43.828 12:30:07 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:44.395 12:30:07 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:44.395 12:30:07 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:44.395 12:30:07 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@50 -- # break 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:44.654 12:30:07 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:44.913 12:30:07 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:44.913 12:30:07 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:44.913 12:30:07 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:44.913 12:30:07 ftl -- ftl/ftl.sh@63 -- # break 00:19:44.913 12:30:07 ftl -- ftl/ftl.sh@66 -- # killprocess 74358 00:19:44.913 12:30:07 ftl -- common/autotest_common.sh@950 -- # '[' -z 74358 ']' 00:19:44.913 12:30:07 ftl -- common/autotest_common.sh@954 -- # kill -0 74358 00:19:44.913 12:30:07 ftl -- common/autotest_common.sh@955 -- # uname 00:19:44.913 12:30:07 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.913 12:30:07 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74358 00:19:44.913 12:30:08 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:44.913 12:30:08 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:44.913 killing process with pid 74358 00:19:44.913 12:30:08 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74358' 00:19:44.913 12:30:08 ftl -- common/autotest_common.sh@969 -- # kill 74358 00:19:44.913 12:30:08 ftl -- common/autotest_common.sh@974 -- # wait 74358 00:19:47.450 12:30:10 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:47.450 12:30:10 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:47.450 12:30:10 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:47.450 12:30:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:47.450 12:30:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:47.450 ************************************ 00:19:47.450 START TEST ftl_fio_basic 00:19:47.450 ************************************ 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:47.450 * Looking for test storage... 00:19:47.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.450 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:47.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.710 --rc genhtml_branch_coverage=1 00:19:47.710 --rc genhtml_function_coverage=1 00:19:47.710 --rc genhtml_legend=1 00:19:47.710 --rc geninfo_all_blocks=1 00:19:47.710 --rc geninfo_unexecuted_blocks=1 00:19:47.710 00:19:47.710 ' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:47.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.710 --rc genhtml_branch_coverage=1 00:19:47.710 --rc genhtml_function_coverage=1 00:19:47.710 --rc genhtml_legend=1 00:19:47.710 --rc geninfo_all_blocks=1 00:19:47.710 --rc geninfo_unexecuted_blocks=1 00:19:47.710 00:19:47.710 ' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:47.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.710 --rc genhtml_branch_coverage=1 00:19:47.710 --rc genhtml_function_coverage=1 00:19:47.710 --rc genhtml_legend=1 00:19:47.710 --rc geninfo_all_blocks=1 00:19:47.710 --rc geninfo_unexecuted_blocks=1 00:19:47.710 00:19:47.710 ' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:47.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.710 --rc genhtml_branch_coverage=1 00:19:47.710 --rc genhtml_function_coverage=1 00:19:47.710 --rc genhtml_legend=1 00:19:47.710 --rc geninfo_all_blocks=1 00:19:47.710 --rc geninfo_unexecuted_blocks=1 00:19:47.710 00:19:47.710 ' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:47.710 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74514 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74514 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74514 ']' 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:47.711 12:30:10 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:47.711 [2024-10-07 12:30:10.893725] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:19:47.711 [2024-10-07 12:30:10.894506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74514 ] 00:19:47.970 [2024-10-07 12:30:11.069193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:48.230 [2024-10-07 12:30:11.284509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.230 [2024-10-07 12:30:11.284640] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.230 [2024-10-07 12:30:11.284675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:49.169 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:49.429 { 00:19:49.429 "name": "nvme0n1", 00:19:49.429 "aliases": [ 00:19:49.429 "cdfa6062-a766-4a7f-b63f-c6bb6162a13a" 00:19:49.429 ], 00:19:49.429 "product_name": "NVMe disk", 00:19:49.429 "block_size": 4096, 00:19:49.429 "num_blocks": 1310720, 00:19:49.429 "uuid": "cdfa6062-a766-4a7f-b63f-c6bb6162a13a", 00:19:49.429 "numa_id": -1, 00:19:49.429 "assigned_rate_limits": { 00:19:49.429 "rw_ios_per_sec": 0, 00:19:49.429 "rw_mbytes_per_sec": 0, 00:19:49.429 "r_mbytes_per_sec": 0, 00:19:49.429 "w_mbytes_per_sec": 0 00:19:49.429 }, 00:19:49.429 "claimed": false, 00:19:49.429 "zoned": false, 00:19:49.429 "supported_io_types": { 00:19:49.429 "read": true, 00:19:49.429 "write": true, 00:19:49.429 "unmap": true, 00:19:49.429 "flush": true, 00:19:49.429 "reset": true, 00:19:49.429 "nvme_admin": true, 00:19:49.429 "nvme_io": true, 00:19:49.429 "nvme_io_md": false, 00:19:49.429 "write_zeroes": true, 00:19:49.429 "zcopy": false, 00:19:49.429 "get_zone_info": false, 00:19:49.429 "zone_management": false, 00:19:49.429 "zone_append": false, 00:19:49.429 "compare": true, 00:19:49.429 "compare_and_write": false, 00:19:49.429 "abort": true, 00:19:49.429 
"seek_hole": false, 00:19:49.429 "seek_data": false, 00:19:49.429 "copy": true, 00:19:49.429 "nvme_iov_md": false 00:19:49.429 }, 00:19:49.429 "driver_specific": { 00:19:49.429 "nvme": [ 00:19:49.429 { 00:19:49.429 "pci_address": "0000:00:11.0", 00:19:49.429 "trid": { 00:19:49.429 "trtype": "PCIe", 00:19:49.429 "traddr": "0000:00:11.0" 00:19:49.429 }, 00:19:49.429 "ctrlr_data": { 00:19:49.429 "cntlid": 0, 00:19:49.429 "vendor_id": "0x1b36", 00:19:49.429 "model_number": "QEMU NVMe Ctrl", 00:19:49.429 "serial_number": "12341", 00:19:49.429 "firmware_revision": "8.0.0", 00:19:49.429 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:49.429 "oacs": { 00:19:49.429 "security": 0, 00:19:49.429 "format": 1, 00:19:49.429 "firmware": 0, 00:19:49.429 "ns_manage": 1 00:19:49.429 }, 00:19:49.429 "multi_ctrlr": false, 00:19:49.429 "ana_reporting": false 00:19:49.429 }, 00:19:49.429 "vs": { 00:19:49.429 "nvme_version": "1.4" 00:19:49.429 }, 00:19:49.429 "ns_data": { 00:19:49.429 "id": 1, 00:19:49.429 "can_share": false 00:19:49.429 } 00:19:49.429 } 00:19:49.429 ], 00:19:49.429 "mp_policy": "active_passive" 00:19:49.429 } 00:19:49.429 } 00:19:49.429 ]' 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:49.429 12:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:49.689 12:30:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:49.948 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4 00:19:49.948 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=10a0a4ca-7016-4071-9096-440fae46f0c7 
00:19:50.208 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:50.208 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:50.468 { 00:19:50.468 "name": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:50.468 "aliases": [ 00:19:50.468 "lvs/nvme0n1p0" 00:19:50.468 ], 00:19:50.468 "product_name": "Logical Volume", 00:19:50.468 "block_size": 4096, 00:19:50.468 "num_blocks": 26476544, 00:19:50.468 "uuid": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:50.468 "assigned_rate_limits": { 00:19:50.468 "rw_ios_per_sec": 0, 00:19:50.468 "rw_mbytes_per_sec": 0, 00:19:50.468 "r_mbytes_per_sec": 0, 00:19:50.468 "w_mbytes_per_sec": 0 00:19:50.468 }, 00:19:50.468 "claimed": false, 00:19:50.468 "zoned": false, 00:19:50.468 "supported_io_types": { 00:19:50.468 "read": true, 00:19:50.468 "write": true, 00:19:50.468 "unmap": true, 00:19:50.468 "flush": false, 00:19:50.468 "reset": true, 00:19:50.468 "nvme_admin": false, 00:19:50.468 "nvme_io": false, 00:19:50.468 "nvme_io_md": false, 00:19:50.468 "write_zeroes": true, 00:19:50.468 "zcopy": false, 00:19:50.468 "get_zone_info": false, 00:19:50.468 "zone_management": false, 00:19:50.468 "zone_append": false, 00:19:50.468 "compare": false, 00:19:50.468 "compare_and_write": false, 00:19:50.468 "abort": false, 00:19:50.468 "seek_hole": true, 00:19:50.468 "seek_data": true, 00:19:50.468 "copy": false, 00:19:50.468 "nvme_iov_md": false 00:19:50.468 }, 00:19:50.468 "driver_specific": { 00:19:50.468 "lvol": { 00:19:50.468 "lvol_store_uuid": "8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4", 00:19:50.468 "base_bdev": "nvme0n1", 00:19:50.468 "thin_provision": true, 00:19:50.468 "num_allocated_clusters": 0, 00:19:50.468 "snapshot": false, 00:19:50.468 "clone": false, 00:19:50.468 "esnap_clone": false 00:19:50.468 } 00:19:50.468 } 00:19:50.468 } 00:19:50.468 ]' 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:50.468 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:50.727 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:50.727 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:50.727 12:30:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.727 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.727 12:30:13 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:50.727 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:50.728 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:50.728 12:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:50.986 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:50.986 { 00:19:50.986 "name": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:50.986 "aliases": [ 00:19:50.986 "lvs/nvme0n1p0" 00:19:50.986 ], 00:19:50.986 "product_name": "Logical Volume", 00:19:50.986 "block_size": 4096, 00:19:50.986 "num_blocks": 26476544, 00:19:50.986 "uuid": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:50.986 "assigned_rate_limits": { 00:19:50.986 "rw_ios_per_sec": 0, 00:19:50.986 "rw_mbytes_per_sec": 0, 00:19:50.987 "r_mbytes_per_sec": 0, 00:19:50.987 "w_mbytes_per_sec": 0 00:19:50.987 }, 00:19:50.987 "claimed": false, 00:19:50.987 "zoned": false, 00:19:50.987 "supported_io_types": { 00:19:50.987 "read": true, 00:19:50.987 "write": true, 00:19:50.987 "unmap": true, 00:19:50.987 "flush": false, 00:19:50.987 "reset": true, 00:19:50.987 "nvme_admin": false, 00:19:50.987 "nvme_io": false, 00:19:50.987 "nvme_io_md": false, 00:19:50.987 "write_zeroes": true, 00:19:50.987 "zcopy": false, 00:19:50.987 "get_zone_info": false, 00:19:50.987 "zone_management": false, 00:19:50.987 "zone_append": false, 00:19:50.987 "compare": false, 00:19:50.987 "compare_and_write": false, 00:19:50.987 "abort": false, 00:19:50.987 "seek_hole": true, 00:19:50.987 "seek_data": true, 00:19:50.987 "copy": false, 00:19:50.987 "nvme_iov_md": false 00:19:50.987 }, 00:19:50.987 "driver_specific": { 00:19:50.987 "lvol": { 00:19:50.987 "lvol_store_uuid": "8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4", 00:19:50.987 "base_bdev": "nvme0n1", 00:19:50.987 "thin_provision": true, 00:19:50.987 "num_allocated_clusters": 0, 00:19:50.987 "snapshot": false, 00:19:50.987 "clone": false, 00:19:50.987 "esnap_clone": false 00:19:50.987 } 00:19:50.987 } 00:19:50.987 } 00:19:50.987 ]' 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:50.987 12:30:14 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:51.246 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:51.246 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10a0a4ca-7016-4071-9096-440fae46f0c7 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:51.504 { 00:19:51.504 "name": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:51.504 "aliases": [ 00:19:51.504 "lvs/nvme0n1p0" 00:19:51.504 ], 00:19:51.504 "product_name": "Logical Volume", 00:19:51.504 "block_size": 4096, 00:19:51.504 "num_blocks": 26476544, 00:19:51.504 "uuid": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:51.504 "assigned_rate_limits": { 00:19:51.504 "rw_ios_per_sec": 0, 00:19:51.504 "rw_mbytes_per_sec": 0, 00:19:51.504 "r_mbytes_per_sec": 0, 00:19:51.504 "w_mbytes_per_sec": 0 00:19:51.504 }, 00:19:51.504 "claimed": false, 00:19:51.504 "zoned": false, 00:19:51.504 "supported_io_types": { 00:19:51.504 "read": true, 00:19:51.504 "write": true, 00:19:51.504 "unmap": true, 00:19:51.504 "flush": false, 00:19:51.504 "reset": true, 00:19:51.504 "nvme_admin": false, 00:19:51.504 "nvme_io": false, 00:19:51.504 "nvme_io_md": false, 00:19:51.504 "write_zeroes": true, 00:19:51.504 "zcopy": false, 00:19:51.504 "get_zone_info": false, 00:19:51.504 "zone_management": false, 00:19:51.504 "zone_append": false, 00:19:51.504 "compare": false, 00:19:51.504 "compare_and_write": false, 00:19:51.504 "abort": false, 00:19:51.504 "seek_hole": true, 00:19:51.504 "seek_data": true, 00:19:51.504 "copy": false, 00:19:51.504 "nvme_iov_md": false 00:19:51.504 }, 00:19:51.504 "driver_specific": { 00:19:51.504 "lvol": { 00:19:51.504 "lvol_store_uuid": "8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4", 00:19:51.504 "base_bdev": "nvme0n1", 00:19:51.504 "thin_provision": true, 00:19:51.504 "num_allocated_clusters": 0, 00:19:51.504 "snapshot": false, 00:19:51.504 "clone": false, 00:19:51.504 "esnap_clone": false 00:19:51.504 } 00:19:51.504 } 00:19:51.504 } 00:19:51.504 ]' 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:51.504 12:30:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:51.505 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:51.505 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:51.505 12:30:14 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 10a0a4ca-7016-4071-9096-440fae46f0c7 -c nvc0n1p0 --l2p_dram_limit 60 00:19:51.764 [2024-10-07 12:30:14.912586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.764 [2024-10-07 12:30:14.913007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:51.764 [2024-10-07 12:30:14.913093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:51.765 
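Condensed for reference, the device stack assembled before this bdev_ftl_create call; every RPC below is taken verbatim from the trace above (rpc.py again shortened from its full path):

  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore 8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4   # thin 103424 MiB base lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # controller at 0000:00:10.0 -> nvc0n1
  rpc.py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB split -> nvc0n1p0 (NV cache)
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d 10a0a4ca-7016-4071-9096-440fae46f0c7 -c nvc0n1p0 --l2p_dram_limit 60

One wrinkle worth noting: the "[: -eq: unary operator expected" message from fio.sh line 52 is the usual symptom of an unquoted empty variable in a numeric test ('[ $x -eq 1 ]' with $x unset). [ returns nonzero on that error, so the condition simply evaluates false and the script continues, which is why the trace carries on to fio.sh line 56.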
[2024-10-07 12:30:14.913149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.913312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.913373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.765 [2024-10-07 12:30:14.913447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:51.765 [2024-10-07 12:30:14.913502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.913601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:51.765 [2024-10-07 12:30:14.914748] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:51.765 [2024-10-07 12:30:14.915030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.915087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.765 [2024-10-07 12:30:14.915144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:19:51.765 [2024-10-07 12:30:14.915199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.915372] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fb727a5a-e79c-4a42-9522-170c4cdb256f 00:19:51.765 [2024-10-07 12:30:14.917003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.917196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:51.765 [2024-10-07 12:30:14.917270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:51.765 [2024-10-07 12:30:14.917330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.924934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.925145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.765 [2024-10-07 12:30:14.925239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms 00:19:51.765 [2024-10-07 12:30:14.925294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.925467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.925633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.765 [2024-10-07 12:30:14.925740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:51.765 [2024-10-07 12:30:14.925803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.926068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.926160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:51.765 [2024-10-07 12:30:14.926176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:51.765 [2024-10-07 12:30:14.926189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.926245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.765 [2024-10-07 12:30:14.931619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 
12:30:14.931801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.765 [2024-10-07 12:30:14.932030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.386 ms 00:19:51.765 [2024-10-07 12:30:14.932195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.932346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.932494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:51.765 [2024-10-07 12:30:14.932623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:51.765 [2024-10-07 12:30:14.932703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.933025] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:51.765 [2024-10-07 12:30:14.933247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:51.765 [2024-10-07 12:30:14.933418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:51.765 [2024-10-07 12:30:14.933540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:51.765 [2024-10-07 12:30:14.933696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:51.765 [2024-10-07 12:30:14.933759] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:51.765 [2024-10-07 12:30:14.933835] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:51.765 [2024-10-07 12:30:14.933960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:51.765 [2024-10-07 12:30:14.934041] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:51.765 [2024-10-07 12:30:14.934098] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:51.765 [2024-10-07 12:30:14.934154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.934266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:51.765 [2024-10-07 12:30:14.934356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:19:51.765 [2024-10-07 12:30:14.934417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.934565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.765 [2024-10-07 12:30:14.934707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:51.765 [2024-10-07 12:30:14.934815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:51.765 [2024-10-07 12:30:14.934914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.765 [2024-10-07 12:30:14.935134] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:51.765 [2024-10-07 12:30:14.935285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:51.765 [2024-10-07 12:30:14.935383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.765 [2024-10-07 12:30:14.935521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.765 [2024-10-07 12:30:14.935609] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:51.765 [2024-10-07 12:30:14.935686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:51.765 [2024-10-07 12:30:14.935738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:51.765 [2024-10-07 12:30:14.935812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:51.765 [2024-10-07 12:30:14.935874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.765 [2024-10-07 12:30:14.936099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:51.765 [2024-10-07 12:30:14.936148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:51.765 [2024-10-07 12:30:14.936198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.765 [2024-10-07 12:30:14.936319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:51.765 [2024-10-07 12:30:14.936402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:51.765 [2024-10-07 12:30:14.936460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:51.765 [2024-10-07 12:30:14.936557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:51.765 [2024-10-07 12:30:14.936687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:51.765 [2024-10-07 12:30:14.936723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.765 [2024-10-07 12:30:14.936745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:51.765 [2024-10-07 12:30:14.936755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.765 [2024-10-07 12:30:14.936776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:51.765 [2024-10-07 12:30:14.936788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.765 [2024-10-07 12:30:14.936809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:51.765 [2024-10-07 12:30:14.936818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.765 [2024-10-07 12:30:14.936839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:51.765 [2024-10-07 12:30:14.936853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.765 [2024-10-07 12:30:14.936874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:51.765 [2024-10-07 12:30:14.936884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:51.765 [2024-10-07 12:30:14.936895] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.765 [2024-10-07 12:30:14.936915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:51.765 [2024-10-07 12:30:14.936928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:51.765 [2024-10-07 12:30:14.936954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.765 [2024-10-07 12:30:14.936966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:51.765 [2024-10-07 12:30:14.936975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:51.765 [2024-10-07 12:30:14.936987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.766 [2024-10-07 12:30:14.936996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:51.766 [2024-10-07 12:30:14.937014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:51.766 [2024-10-07 12:30:14.937024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.766 [2024-10-07 12:30:14.937039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.766 [2024-10-07 12:30:14.937050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:51.766 [2024-10-07 12:30:14.937067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:51.766 [2024-10-07 12:30:14.937076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:51.766 [2024-10-07 12:30:14.937088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:51.766 [2024-10-07 12:30:14.937098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:51.766 [2024-10-07 12:30:14.937110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:51.766 [2024-10-07 12:30:14.937126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:51.766 [2024-10-07 12:30:14.937143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:51.766 [2024-10-07 12:30:14.937167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:51.766 [2024-10-07 12:30:14.937178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:51.766 [2024-10-07 12:30:14.937191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:51.766 [2024-10-07 12:30:14.937201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:51.766 [2024-10-07 12:30:14.937214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:51.766 [2024-10-07 12:30:14.937225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:51.766 [2024-10-07 12:30:14.937237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:51.766 [2024-10-07 12:30:14.937247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:51.766 [2024-10-07 12:30:14.937263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:51.766 [2024-10-07 12:30:14.937320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:51.766 [2024-10-07 12:30:14.937334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:51.766 [2024-10-07 12:30:14.937358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:51.766 [2024-10-07 12:30:14.937368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:51.766 [2024-10-07 12:30:14.937381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:51.766 [2024-10-07 12:30:14.937395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.766 [2024-10-07 12:30:14.937407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:51.766 [2024-10-07 12:30:14.937418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:19:51.766 [2024-10-07 12:30:14.937430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.766 [2024-10-07 12:30:14.937577] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
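The layout dump just printed is internally consistent, and decoding it once makes the later numbers legible. All figures below are read straight from the trace:

  L2P entries       20971520            (one entry per 4096 B logical block)
  logical capacity  20971520 * 4096 B = 80 GiB   (ftl0 indeed reports num_blocks 20971520 further down)
  full L2P table    20971520 * 4 B    = 80 MiB   (per "L2P address size: 4"; matches the 80.00 MiB l2p region)
  --l2p_dram_limit 60                  -> at most ~60 MiB of that table kept resident in DRAM, hence the
                                          upcoming "l2p maximum resident size is: 59 (of 60) MiB" notice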
00:19:51.766 [2024-10-07 12:30:14.937605] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:57.037 [2024-10-07 12:30:20.141797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.142401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:57.037 [2024-10-07 12:30:20.142496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5212.677 ms 00:19:57.037 [2024-10-07 12:30:20.142555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.185600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.185988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.037 [2024-10-07 12:30:20.186516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.752 ms 00:19:57.037 [2024-10-07 12:30:20.186586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.186792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.186853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.037 [2024-10-07 12:30:20.186942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:57.037 [2024-10-07 12:30:20.186966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.230870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.230931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.037 [2024-10-07 12:30:20.230946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.868 ms 00:19:57.037 [2024-10-07 12:30:20.230959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.231054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.231071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.037 [2024-10-07 12:30:20.231085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:57.037 [2024-10-07 12:30:20.231098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.231603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.231627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.037 [2024-10-07 12:30:20.231638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:19:57.037 [2024-10-07 12:30:20.231650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.231790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.231807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.037 [2024-10-07 12:30:20.231818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:57.037 [2024-10-07 12:30:20.231836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.252241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.252425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.037 [2024-10-07 
12:30:20.252449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.382 ms 00:19:57.037 [2024-10-07 12:30:20.252463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.037 [2024-10-07 12:30:20.265480] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.037 [2024-10-07 12:30:20.282082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.037 [2024-10-07 12:30:20.282126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.037 [2024-10-07 12:30:20.282145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.524 ms 00:19:57.037 [2024-10-07 12:30:20.282156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.296 [2024-10-07 12:30:20.443489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.296 [2024-10-07 12:30:20.443544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:57.296 [2024-10-07 12:30:20.443564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 161.524 ms 00:19:57.296 [2024-10-07 12:30:20.443575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.296 [2024-10-07 12:30:20.443793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.296 [2024-10-07 12:30:20.443809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.296 [2024-10-07 12:30:20.443830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:57.296 [2024-10-07 12:30:20.443840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.296 [2024-10-07 12:30:20.480267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.296 [2024-10-07 12:30:20.480307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:57.296 [2024-10-07 12:30:20.480325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.347 ms 00:19:57.296 [2024-10-07 12:30:20.480352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.296 [2024-10-07 12:30:20.516109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.296 [2024-10-07 12:30:20.516266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:57.296 [2024-10-07 12:30:20.516294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.768 ms 00:19:57.296 [2024-10-07 12:30:20.516304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.296 [2024-10-07 12:30:20.517095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.296 [2024-10-07 12:30:20.517114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.296 [2024-10-07 12:30:20.517128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:19:57.296 [2024-10-07 12:30:20.517138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 12:30:20.657505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.556 [2024-10-07 12:30:20.657551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:57.556 [2024-10-07 12:30:20.657574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.490 ms 00:19:57.556 [2024-10-07 12:30:20.657585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 
12:30:20.694320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.556 [2024-10-07 12:30:20.694477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:57.556 [2024-10-07 12:30:20.694521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.631 ms 00:19:57.556 [2024-10-07 12:30:20.694533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 12:30:20.730195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.556 [2024-10-07 12:30:20.730231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:57.556 [2024-10-07 12:30:20.730247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.650 ms 00:19:57.556 [2024-10-07 12:30:20.730273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 12:30:20.766652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.556 [2024-10-07 12:30:20.766690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.556 [2024-10-07 12:30:20.766707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.370 ms 00:19:57.556 [2024-10-07 12:30:20.766717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 12:30:20.766785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.556 [2024-10-07 12:30:20.766797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.556 [2024-10-07 12:30:20.766815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:57.556 [2024-10-07 12:30:20.766825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 12:30:20.767023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.556 [2024-10-07 12:30:20.767038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.556 [2024-10-07 12:30:20.767052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:57.556 [2024-10-07 12:30:20.767065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.556 [2024-10-07 12:30:20.768216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5864.673 ms, result 0 00:19:57.556 { 00:19:57.556 "name": "ftl0", 00:19:57.556 "uuid": "fb727a5a-e79c-4a42-9522-170c4cdb256f" 00:19:57.556 } 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.556 12:30:20 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:57.815 12:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:58.074 [ 00:19:58.074 { 00:19:58.074 "name": "ftl0", 00:19:58.074 "aliases": [ 00:19:58.074 "fb727a5a-e79c-4a42-9522-170c4cdb256f" 00:19:58.074 ], 00:19:58.074 "product_name": "FTL 
disk", 00:19:58.074 "block_size": 4096, 00:19:58.074 "num_blocks": 20971520, 00:19:58.074 "uuid": "fb727a5a-e79c-4a42-9522-170c4cdb256f", 00:19:58.074 "assigned_rate_limits": { 00:19:58.074 "rw_ios_per_sec": 0, 00:19:58.074 "rw_mbytes_per_sec": 0, 00:19:58.074 "r_mbytes_per_sec": 0, 00:19:58.074 "w_mbytes_per_sec": 0 00:19:58.074 }, 00:19:58.074 "claimed": false, 00:19:58.074 "zoned": false, 00:19:58.074 "supported_io_types": { 00:19:58.074 "read": true, 00:19:58.074 "write": true, 00:19:58.074 "unmap": true, 00:19:58.074 "flush": true, 00:19:58.074 "reset": false, 00:19:58.074 "nvme_admin": false, 00:19:58.074 "nvme_io": false, 00:19:58.074 "nvme_io_md": false, 00:19:58.074 "write_zeroes": true, 00:19:58.074 "zcopy": false, 00:19:58.074 "get_zone_info": false, 00:19:58.074 "zone_management": false, 00:19:58.074 "zone_append": false, 00:19:58.074 "compare": false, 00:19:58.074 "compare_and_write": false, 00:19:58.074 "abort": false, 00:19:58.074 "seek_hole": false, 00:19:58.074 "seek_data": false, 00:19:58.074 "copy": false, 00:19:58.074 "nvme_iov_md": false 00:19:58.074 }, 00:19:58.074 "driver_specific": { 00:19:58.074 "ftl": { 00:19:58.074 "base_bdev": "10a0a4ca-7016-4071-9096-440fae46f0c7", 00:19:58.074 "cache": "nvc0n1p0" 00:19:58.074 } 00:19:58.074 } 00:19:58.074 } 00:19:58.074 ] 00:19:58.074 12:30:21 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:19:58.074 12:30:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:58.074 12:30:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:58.333 12:30:21 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:58.333 12:30:21 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:58.333 [2024-10-07 12:30:21.573143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.333 [2024-10-07 12:30:21.573358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.333 [2024-10-07 12:30:21.573388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.333 [2024-10-07 12:30:21.573406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.333 [2024-10-07 12:30:21.573479] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.333 [2024-10-07 12:30:21.577857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.333 [2024-10-07 12:30:21.577891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.333 [2024-10-07 12:30:21.577922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:19:58.333 [2024-10-07 12:30:21.577936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.333 [2024-10-07 12:30:21.578733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.333 [2024-10-07 12:30:21.578762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.333 [2024-10-07 12:30:21.578777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:19:58.333 [2024-10-07 12:30:21.578787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.333 [2024-10-07 12:30:21.581331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.333 [2024-10-07 12:30:21.581353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.333 
[2024-10-07 12:30:21.581367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:19:58.333 [2024-10-07 12:30:21.581378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.334 [2024-10-07 12:30:21.586457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.334 [2024-10-07 12:30:21.586488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:58.334 [2024-10-07 12:30:21.586503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.038 ms 00:19:58.334 [2024-10-07 12:30:21.586512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.334 [2024-10-07 12:30:21.623486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.334 [2024-10-07 12:30:21.623524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.334 [2024-10-07 12:30:21.623541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.935 ms 00:19:58.334 [2024-10-07 12:30:21.623567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.647979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.628 [2024-10-07 12:30:21.648122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.628 [2024-10-07 12:30:21.648150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.386 ms 00:19:58.628 [2024-10-07 12:30:21.648160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.648498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.628 [2024-10-07 12:30:21.648512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.628 [2024-10-07 12:30:21.648527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:58.628 [2024-10-07 12:30:21.648540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.684934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.628 [2024-10-07 12:30:21.684971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:58.628 [2024-10-07 12:30:21.684987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.404 ms 00:19:58.628 [2024-10-07 12:30:21.685014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.721385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.628 [2024-10-07 12:30:21.721422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:58.628 [2024-10-07 12:30:21.721438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.366 ms 00:19:58.628 [2024-10-07 12:30:21.721464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.757133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.628 [2024-10-07 12:30:21.757277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.628 [2024-10-07 12:30:21.757303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.654 ms 00:19:58.628 [2024-10-07 12:30:21.757313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.793337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.628 [2024-10-07 12:30:21.793372] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:58.628 [2024-10-07 12:30:21.793387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.889 ms 00:19:58.628 [2024-10-07 12:30:21.793413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.628 [2024-10-07 12:30:21.793477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:58.628 [2024-10-07 12:30:21.793497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:58.628 [2024-10-07 12:30:21.793513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:58.628 [2024-10-07 12:30:21.793525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:58.628 [2024-10-07 12:30:21.793539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:58.628 [2024-10-07 12:30:21.793550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:58.628 [2024-10-07 12:30:21.793563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:58.628 [2024-10-07 12:30:21.793574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.793761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 
[2024-10-07 12:30:21.793772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free [... ftl_dev_dump_bands entries for Bands 24 through 96 elided: every band reports the same 0 / 261120 wr_cnt: 0 state: free ...] 00:19:58.629 [2024-10-07 12:30:21.794731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.794746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.794756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.794770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:58.629 [2024-10-07 12:30:21.794787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:58.629 [2024-10-07 12:30:21.794802] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb727a5a-e79c-4a42-9522-170c4cdb256f 00:19:58.630 [2024-10-07 12:30:21.794813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:58.630 [2024-10-07 12:30:21.794828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:58.630 [2024-10-07 12:30:21.794838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:58.630 [2024-10-07 12:30:21.794851] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:58.630 [2024-10-07 12:30:21.794860] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:58.630 [2024-10-07 12:30:21.794873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:58.630 [2024-10-07 12:30:21.794883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:58.630 [2024-10-07 12:30:21.794895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:58.630 [2024-10-07 12:30:21.794915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:58.630 [2024-10-07 12:30:21.794928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.630 [2024-10-07 12:30:21.794939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:58.630 [2024-10-07 12:30:21.794955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:19:58.630 [2024-10-07 12:30:21.794965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.630 [2024-10-07 12:30:21.815049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.630 [2024-10-07 12:30:21.815085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:58.630 [2024-10-07 12:30:21.815100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.023 ms 00:19:58.630 [2024-10-07 12:30:21.815126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.630 [2024-10-07 12:30:21.815702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.630 [2024-10-07 12:30:21.815717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:58.630 [2024-10-07 12:30:21.815730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:19:58.630 [2024-10-07 12:30:21.815740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.630 [2024-10-07 12:30:21.885150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.630 [2024-10-07 12:30:21.885187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.630 [2024-10-07 12:30:21.885203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.630 [2024-10-07 12:30:21.885214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
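The statistics dump above is easy to sanity-check: only FTL housekeeping (superblock, band, trim and P2L metadata) has touched the media so far, so

  WAF = total media writes / user writes = 960 / 0 -> inf

i.e. write amplification is undefined until user I/O arrives. The "Set FTL clean state" step persisted just before this dump marks an orderly shutdown, which should let the next load of this device skip dirty-shutdown recovery.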
00:19:58.630 [2024-10-07 12:30:21.885306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.630 [2024-10-07 12:30:21.885317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.630 [2024-10-07 12:30:21.885330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.630 [2024-10-07 12:30:21.885339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.630 [2024-10-07 12:30:21.885476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.630 [2024-10-07 12:30:21.885491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.630 [2024-10-07 12:30:21.885504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.630 [2024-10-07 12:30:21.885514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.630 [2024-10-07 12:30:21.885565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.630 [2024-10-07 12:30:21.885575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.630 [2024-10-07 12:30:21.885592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.630 [2024-10-07 12:30:21.885602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.019681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.019728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.889 [2024-10-07 12:30:22.019746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.019756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.121155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.121345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.889 [2024-10-07 12:30:22.121372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.121383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.121552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.121564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.889 [2024-10-07 12:30:22.121578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.121588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.121717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.121730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.889 [2024-10-07 12:30:22.121743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.121757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.121966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.121981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.889 [2024-10-07 12:30:22.121995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 
12:30:22.122005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.122091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.122105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.889 [2024-10-07 12:30:22.122118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.122132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.122228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.122240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.889 [2024-10-07 12:30:22.122253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.122263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.122359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.889 [2024-10-07 12:30:22.122372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.889 [2024-10-07 12:30:22.122385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.889 [2024-10-07 12:30:22.122397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.889 [2024-10-07 12:30:22.122694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.404 ms, result 0 00:19:58.889 true 00:19:58.889 12:30:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74514 00:19:58.889 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74514 ']' 00:19:58.889 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74514 00:19:58.889 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:19:58.889 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.889 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74514 00:19:59.148 killing process with pid 74514 00:19:59.148 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.148 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.148 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74514' 00:19:59.148 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74514 00:19:59.148 12:30:22 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74514 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.434 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:02.434 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:02.434 fio-3.35 00:20:02.434 Starting 1 thread 00:20:09.000 00:20:09.000 test: (groupid=0, jobs=1): err= 0: pid=74751: Mon Oct 7 12:30:31 2024 00:20:09.000 read: IOPS=879, BW=58.4MiB/s (61.3MB/s)(255MiB/4357msec) 00:20:09.000 slat (nsec): min=4365, max=35170, avg=11614.46, stdev=3673.42 00:20:09.000 clat (usec): min=336, max=978, avg=509.64, stdev=55.50 00:20:09.000 lat (usec): min=349, max=989, avg=521.25, stdev=56.01 00:20:09.000 clat percentiles (usec): 00:20:09.000 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 482], 00:20:09.000 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 510], 00:20:09.000 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 586], 00:20:09.000 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 816], 99.95th=[ 889], 00:20:09.000 | 99.99th=[ 979] 00:20:09.000 write: IOPS=886, BW=58.8MiB/s (61.7MB/s)(256MiB/4352msec); 0 zone resets 00:20:09.000 slat (nsec): min=15901, max=65122, avg=24911.68, stdev=5490.91 00:20:09.000 clat (usec): min=362, max=1383, avg=575.03, stdev=73.64 00:20:09.000 lat (usec): min=387, max=1413, avg=599.94, stdev=73.87 00:20:09.000 clat percentiles (usec): 00:20:09.000 | 1.00th=[ 437], 5.00th=[ 478], 10.00th=[ 510], 20.00th=[ 523], 00:20:09.000 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 586], 00:20:09.000 | 70.00th=[ 594], 80.00th=[ 603], 90.00th=[ 619], 95.00th=[ 668], 00:20:09.000 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 1020], 99.95th=[ 1287], 00:20:09.000 | 99.99th=[ 1385] 00:20:09.000 bw ( KiB/s): min=59160, max=61608, per=100.00%, avg=60384.00, stdev=751.96, samples=8 00:20:09.000 iops : min= 870, max= 906, avg=888.00, stdev=11.06, samples=8 00:20:09.000 lat (usec) : 500=26.92%, 750=71.52%, 1000=1.50% 00:20:09.000 lat 
(msec) : 2=0.07% 00:20:09.000 cpu : usr=99.17%, sys=0.11%, ctx=9, majf=0, minf=1169 00:20:09.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.000 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.000 00:20:09.000 Run status group 0 (all jobs): 00:20:09.000 READ: bw=58.4MiB/s (61.3MB/s), 58.4MiB/s-58.4MiB/s (61.3MB/s-61.3MB/s), io=255MiB (267MB), run=4357-4357msec 00:20:09.000 WRITE: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=256MiB (269MB), run=4352-4352msec 00:20:09.935 ----------------------------------------------------- 00:20:09.935 Suppressions used: 00:20:09.935 count bytes template 00:20:09.935 1 5 /usr/src/fio/parse.c 00:20:09.935 1 8 libtcmalloc_minimal.so 00:20:09.935 1 904 libcrypto.so 00:20:09.935 ----------------------------------------------------- 00:20:09.935 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:10.194 12:30:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:10.453 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:10.453 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:10.453 fio-3.35 00:20:10.453 Starting 2 threads 00:20:42.532 00:20:42.532 first_half: (groupid=0, jobs=1): err= 0: pid=74865: Mon Oct 7 12:31:02 2024 00:20:42.532 read: IOPS=2427, BW=9710KiB/s (9943kB/s)(255MiB/26876msec) 00:20:42.532 slat (usec): min=3, max=104, avg= 8.85, stdev= 3.48 00:20:42.532 clat (usec): min=940, max=335832, avg=40992.91, stdev=22424.90 00:20:42.532 lat (usec): min=949, max=335836, avg=41001.75, stdev=22425.31 00:20:42.532 clat percentiles (msec): 00:20:42.532 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 35], 00:20:42.532 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:20:42.532 | 70.00th=[ 38], 80.00th=[ 42], 90.00th=[ 47], 95.00th=[ 62], 00:20:42.532 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 224], 99.95th=[ 279], 00:20:42.532 | 99.99th=[ 326] 00:20:42.532 write: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(256MiB/20995msec); 0 zone resets 00:20:42.532 slat (usec): min=4, max=1628, avg=10.72, stdev=14.05 00:20:42.532 clat (usec): min=449, max=111230, avg=11624.51, stdev=19506.39 00:20:42.532 lat (usec): min=459, max=111236, avg=11635.23, stdev=19506.58 00:20:42.532 clat percentiles (usec): 00:20:42.532 | 1.00th=[ 1057], 5.00th=[ 1385], 10.00th=[ 1598], 20.00th=[ 1909], 00:20:42.532 | 30.00th=[ 2769], 40.00th=[ 4686], 50.00th=[ 6718], 60.00th=[ 8160], 00:20:42.532 | 70.00th=[ 9634], 80.00th=[ 11338], 90.00th=[ 13829], 95.00th=[ 78119], 00:20:42.532 | 99.00th=[ 88605], 99.50th=[ 90702], 99.90th=[108528], 99.95th=[109577], 00:20:42.532 | 99.99th=[109577] 00:20:42.532 bw ( KiB/s): min= 1616, max=41248, per=100.00%, avg=24966.10, stdev=12344.15, samples=21 00:20:42.532 iops : min= 404, max=10312, avg=6241.52, stdev=3086.04, samples=21 00:20:42.532 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.31% 00:20:42.532 lat (msec) : 2=11.00%, 4=6.99%, 10=18.63%, 20=9.97%, 50=46.18% 00:20:42.532 lat (msec) : 100=5.36%, 250=1.48%, 500=0.04% 00:20:42.532 cpu : usr=99.18%, sys=0.21%, ctx=51, majf=0, minf=5559 00:20:42.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:42.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.532 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:42.532 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:42.532 second_half: (groupid=0, jobs=1): err= 0: pid=74866: Mon Oct 7 12:31:02 2024 00:20:42.532 read: IOPS=2407, BW=9630KiB/s (9861kB/s)(255MiB/27103msec) 00:20:42.532 slat (usec): min=3, max=123, avg= 9.03, stdev= 4.72 00:20:42.532 clat (usec): min=942, max=339753, avg=40278.26, stdev=24374.95 00:20:42.532 lat (usec): min=952, max=339770, avg=40287.29, stdev=24375.52 00:20:42.532 clat percentiles (msec): 00:20:42.532 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 34], 00:20:42.532 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:20:42.532 | 70.00th=[ 37], 80.00th=[ 42], 90.00th=[ 44], 95.00th=[ 57], 00:20:42.532 | 99.00th=[ 
178], 99.50th=[ 203], 99.90th=[ 251], 99.95th=[ 259], 00:20:42.532 | 99.99th=[ 334] 00:20:42.532 write: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(256MiB/23320msec); 0 zone resets 00:20:42.532 slat (usec): min=4, max=1448, avg=12.85, stdev=10.88 00:20:42.532 clat (usec): min=439, max=111962, avg=12792.52, stdev=20770.43 00:20:42.532 lat (usec): min=455, max=111980, avg=12805.37, stdev=20771.59 00:20:42.532 clat percentiles (usec): 00:20:42.532 | 1.00th=[ 996], 5.00th=[ 1270], 10.00th=[ 1500], 20.00th=[ 1844], 00:20:42.532 | 30.00th=[ 2507], 40.00th=[ 4948], 50.00th=[ 6456], 60.00th=[ 7570], 00:20:42.532 | 70.00th=[ 9241], 80.00th=[ 11863], 90.00th=[ 37487], 95.00th=[ 79168], 00:20:42.532 | 99.00th=[ 89654], 99.50th=[ 90702], 99.90th=[109577], 99.95th=[110625], 00:20:42.532 | 99.99th=[111674] 00:20:42.532 bw ( KiB/s): min= 984, max=46704, per=86.38%, avg=19421.00, stdev=13401.56, samples=27 00:20:42.532 iops : min= 246, max=11676, avg=4855.22, stdev=3350.35, samples=27 00:20:42.532 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.48% 00:20:42.532 lat (msec) : 2=11.48%, 4=6.55%, 10=19.58%, 20=8.06%, 50=47.37% 00:20:42.532 lat (msec) : 100=4.78%, 250=1.59%, 500=0.06% 00:20:42.532 cpu : usr=99.14%, sys=0.26%, ctx=68, majf=0, minf=5556 00:20:42.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:42.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.532 complete : 0=0.0%, 4=99.4%, 8=0.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:42.532 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:42.532 00:20:42.532 Run status group 0 (all jobs): 00:20:42.533 READ: bw=18.8MiB/s (19.7MB/s), 9630KiB/s-9710KiB/s (9861kB/s-9943kB/s), io=510MiB (534MB), run=26876-27103msec 00:20:42.533 WRITE: bw=22.0MiB/s (23.0MB/s), 11.0MiB/s-12.2MiB/s (11.5MB/s-12.8MB/s), io=512MiB (537MB), run=20995-23320msec 00:20:42.533 ----------------------------------------------------- 00:20:42.533 Suppressions used: 00:20:42.533 count bytes template 00:20:42.533 2 10 /usr/src/fio/parse.c 00:20:42.533 2 192 /usr/src/fio/iolog.c 00:20:42.533 1 8 libtcmalloc_minimal.so 00:20:42.533 1 904 libcrypto.so 00:20:42.533 ----------------------------------------------------- 00:20:42.533 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.533 12:31:04 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:42.533 12:31:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:42.533 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:42.533 fio-3.35 00:20:42.533 Starting 1 thread 00:20:57.409 00:20:57.409 test: (groupid=0, jobs=1): err= 0: pid=75214: Mon Oct 7 12:31:19 2024 00:20:57.409 read: IOPS=7550, BW=29.5MiB/s (30.9MB/s)(255MiB/8636msec) 00:20:57.409 slat (usec): min=3, max=333, avg= 6.63, stdev= 3.90 00:20:57.409 clat (usec): min=617, max=35837, avg=16941.88, stdev=1321.36 00:20:57.409 lat (usec): min=627, max=35847, avg=16948.51, stdev=1322.73 00:20:57.409 clat percentiles (usec): 00:20:57.409 | 1.00th=[15139], 5.00th=[15401], 10.00th=[15533], 20.00th=[15795], 00:20:57.409 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16581], 60.00th=[17433], 00:20:57.409 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:20:57.409 | 99.00th=[19268], 99.50th=[19530], 99.90th=[26608], 99.95th=[31327], 00:20:57.409 | 99.99th=[34866] 00:20:57.409 write: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(256MiB/4648msec); 0 zone resets 00:20:57.409 slat (usec): min=4, max=713, avg= 8.05, stdev= 7.68 00:20:57.409 clat (usec): min=549, max=51973, avg=9031.24, stdev=11137.21 00:20:57.409 lat (usec): min=557, max=51980, avg=9039.29, stdev=11137.24 00:20:57.409 clat percentiles (usec): 00:20:57.409 | 1.00th=[ 889], 5.00th=[ 1057], 10.00th=[ 1172], 20.00th=[ 1385], 00:20:57.409 | 30.00th=[ 1565], 40.00th=[ 1893], 50.00th=[ 5866], 60.00th=[ 6915], 00:20:57.409 | 70.00th=[ 7832], 80.00th=[ 9634], 90.00th=[33162], 95.00th=[34866], 00:20:57.409 | 99.00th=[36963], 99.50th=[38011], 99.90th=[40109], 99.95th=[41681], 00:20:57.409 | 99.99th=[46924] 00:20:57.409 bw ( KiB/s): min=13496, max=78904, per=92.96%, avg=52428.80, stdev=16975.96, samples=10 00:20:57.409 iops : min= 3374, max=19726, avg=13107.20, stdev=4243.99, samples=10 00:20:57.409 lat (usec) : 750=0.07%, 1000=1.48% 00:20:57.409 lat (msec) : 2=18.90%, 4=0.70%, 10=19.77%, 20=51.00%, 50=8.07% 00:20:57.409 lat (msec) : 100=0.01% 00:20:57.409 cpu : usr=98.19%, sys=0.55%, ctx=41, majf=0, minf=5565 00:20:57.409 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:57.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.409 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.409 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.409 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.409 00:20:57.409 Run status group 0 (all jobs): 00:20:57.409 READ: bw=29.5MiB/s (30.9MB/s), 29.5MiB/s-29.5MiB/s (30.9MB/s-30.9MB/s), io=255MiB (267MB), run=8636-8636msec 00:20:57.409 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=256MiB (268MB), run=4648-4648msec 00:20:58.788 ----------------------------------------------------- 00:20:58.788 Suppressions used: 00:20:58.788 count bytes template 00:20:58.788 1 5 /usr/src/fio/parse.c 00:20:58.788 2 192 /usr/src/fio/iolog.c 00:20:58.788 1 8 libtcmalloc_minimal.so 00:20:58.788 1 904 libcrypto.so 00:20:58.788 ----------------------------------------------------- 00:20:58.788 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:58.788 Remove shared memory files 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58088 /dev/shm/spdk_tgt_trace.pid73398 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:58.788 00:20:58.788 real 1m11.436s 00:20:58.788 user 2m34.884s 00:20:58.788 sys 0m4.247s 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.788 12:31:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:58.788 ************************************ 00:20:58.788 END TEST ftl_fio_basic 00:20:58.788 ************************************ 00:20:58.788 12:31:22 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:58.788 12:31:22 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:58.788 12:31:22 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.788 12:31:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:58.788 ************************************ 00:20:58.788 START TEST ftl_bdevperf 00:20:58.788 ************************************ 00:20:58.788 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:59.048 * Looking for test storage... 
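All three fio jobs above (randw-verify, randw-verify-j2, randw-verify-depth128) are launched through the same fio_plugin wrapper visible in the trace: it runs ldd on the SPDK bdev ioengine, greps out the path of the ASan runtime the plugin links against, and preloads that runtime ahead of the plugin so the sanitizer is initialized before fio loads the engine. A condensed sketch using only the commands that appear in the trace (the libclang_rt.asan fallback and error handling are omitted):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=$1    # e.g. test/ftl/config/fio/randw-verify.fio
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # The sanitizer runtime must come first in LD_PRELOAD, then the ioengine.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"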
00:20:59.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:59.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.048 --rc genhtml_branch_coverage=1 00:20:59.048 --rc genhtml_function_coverage=1 00:20:59.048 --rc genhtml_legend=1 00:20:59.048 --rc geninfo_all_blocks=1 00:20:59.048 --rc geninfo_unexecuted_blocks=1 00:20:59.048 00:20:59.048 ' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:59.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.048 --rc genhtml_branch_coverage=1 00:20:59.048 
--rc genhtml_function_coverage=1 00:20:59.048 --rc genhtml_legend=1 00:20:59.048 --rc geninfo_all_blocks=1 00:20:59.048 --rc geninfo_unexecuted_blocks=1 00:20:59.048 00:20:59.048 ' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:59.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.048 --rc genhtml_branch_coverage=1 00:20:59.048 --rc genhtml_function_coverage=1 00:20:59.048 --rc genhtml_legend=1 00:20:59.048 --rc geninfo_all_blocks=1 00:20:59.048 --rc geninfo_unexecuted_blocks=1 00:20:59.048 00:20:59.048 ' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:59.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.048 --rc genhtml_branch_coverage=1 00:20:59.048 --rc genhtml_function_coverage=1 00:20:59.048 --rc genhtml_legend=1 00:20:59.048 --rc geninfo_all_blocks=1 00:20:59.048 --rc geninfo_unexecuted_blocks=1 00:20:59.048 00:20:59.048 ' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:59.048 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:59.049 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75458 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75458 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75458 ']' 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.308 12:31:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:59.308 [2024-10-07 12:31:22.435396] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
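Once bdevperf is up and waitforlisten returns, the script assembles the FTL device over RPC: attach the base NVMe controller, create an lvstore and a 103424 MiB thin lvol on it, attach the cache controller, split off a 5171 MiB cache partition, and create the FTL bdev with a 20 MiB L2P DRAM limit. The calls are traced one by one below; collapsed here for reference (the $rpc shorthand and the <...> UUID placeholders are illustrative, every subcommand and flag is taken from the trace; cleanup of stale lvstores is omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20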
00:20:59.308 [2024-10-07 12:31:22.435524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 00:20:59.567 [2024-10-07 12:31:22.606747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.826 [2024-10-07 12:31:22.859343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:00.085 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:00.344 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:00.603 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:00.603 { 00:21:00.603 "name": "nvme0n1", 00:21:00.603 "aliases": [ 00:21:00.603 "c0049f53-05b9-4377-bd5c-203243891cb7" 00:21:00.603 ], 00:21:00.603 "product_name": "NVMe disk", 00:21:00.603 "block_size": 4096, 00:21:00.603 "num_blocks": 1310720, 00:21:00.603 "uuid": "c0049f53-05b9-4377-bd5c-203243891cb7", 00:21:00.603 "numa_id": -1, 00:21:00.603 "assigned_rate_limits": { 00:21:00.603 "rw_ios_per_sec": 0, 00:21:00.603 "rw_mbytes_per_sec": 0, 00:21:00.603 "r_mbytes_per_sec": 0, 00:21:00.603 "w_mbytes_per_sec": 0 00:21:00.603 }, 00:21:00.603 "claimed": true, 00:21:00.603 "claim_type": "read_many_write_one", 00:21:00.603 "zoned": false, 00:21:00.603 "supported_io_types": { 00:21:00.603 "read": true, 00:21:00.603 "write": true, 00:21:00.603 "unmap": true, 00:21:00.603 "flush": true, 00:21:00.603 "reset": true, 00:21:00.603 "nvme_admin": true, 00:21:00.603 "nvme_io": true, 00:21:00.603 "nvme_io_md": false, 00:21:00.603 "write_zeroes": true, 00:21:00.603 "zcopy": false, 00:21:00.603 "get_zone_info": false, 00:21:00.603 "zone_management": false, 00:21:00.603 "zone_append": false, 00:21:00.603 "compare": true, 00:21:00.603 "compare_and_write": false, 00:21:00.603 "abort": true, 00:21:00.603 "seek_hole": false, 00:21:00.603 "seek_data": false, 00:21:00.603 "copy": true, 00:21:00.603 "nvme_iov_md": false 00:21:00.603 }, 00:21:00.603 "driver_specific": { 00:21:00.603 
"nvme": [ 00:21:00.603 { 00:21:00.603 "pci_address": "0000:00:11.0", 00:21:00.603 "trid": { 00:21:00.603 "trtype": "PCIe", 00:21:00.603 "traddr": "0000:00:11.0" 00:21:00.603 }, 00:21:00.603 "ctrlr_data": { 00:21:00.603 "cntlid": 0, 00:21:00.603 "vendor_id": "0x1b36", 00:21:00.603 "model_number": "QEMU NVMe Ctrl", 00:21:00.603 "serial_number": "12341", 00:21:00.603 "firmware_revision": "8.0.0", 00:21:00.603 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:00.603 "oacs": { 00:21:00.603 "security": 0, 00:21:00.603 "format": 1, 00:21:00.603 "firmware": 0, 00:21:00.603 "ns_manage": 1 00:21:00.603 }, 00:21:00.603 "multi_ctrlr": false, 00:21:00.603 "ana_reporting": false 00:21:00.603 }, 00:21:00.603 "vs": { 00:21:00.603 "nvme_version": "1.4" 00:21:00.603 }, 00:21:00.603 "ns_data": { 00:21:00.603 "id": 1, 00:21:00.603 "can_share": false 00:21:00.603 } 00:21:00.604 } 00:21:00.604 ], 00:21:00.604 "mp_policy": "active_passive" 00:21:00.604 } 00:21:00.604 } 00:21:00.604 ]' 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:00.604 12:31:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:00.863 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4 00:21:00.863 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:00.863 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ca921fe-6b6b-4c21-bbad-ff55ec8a36e4 00:21:01.122 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:01.381 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=eca75305-0086-4562-a699-0d2cab7ebed6 00:21:01.381 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eca75305-0086-4562-a699-0d2cab7ebed6 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:01.640 12:31:24 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:01.640 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:01.640 { 00:21:01.640 "name": "ec09bb56-3254-41bb-8837-e0f04b013f80", 00:21:01.640 "aliases": [ 00:21:01.640 "lvs/nvme0n1p0" 00:21:01.640 ], 00:21:01.640 "product_name": "Logical Volume", 00:21:01.640 "block_size": 4096, 00:21:01.640 "num_blocks": 26476544, 00:21:01.640 "uuid": "ec09bb56-3254-41bb-8837-e0f04b013f80", 00:21:01.640 "assigned_rate_limits": { 00:21:01.640 "rw_ios_per_sec": 0, 00:21:01.640 "rw_mbytes_per_sec": 0, 00:21:01.640 "r_mbytes_per_sec": 0, 00:21:01.640 "w_mbytes_per_sec": 0 00:21:01.640 }, 00:21:01.640 "claimed": false, 00:21:01.640 "zoned": false, 00:21:01.640 "supported_io_types": { 00:21:01.640 "read": true, 00:21:01.641 "write": true, 00:21:01.641 "unmap": true, 00:21:01.641 "flush": false, 00:21:01.641 "reset": true, 00:21:01.641 "nvme_admin": false, 00:21:01.641 "nvme_io": false, 00:21:01.641 "nvme_io_md": false, 00:21:01.641 "write_zeroes": true, 00:21:01.641 "zcopy": false, 00:21:01.641 "get_zone_info": false, 00:21:01.641 "zone_management": false, 00:21:01.641 "zone_append": false, 00:21:01.641 "compare": false, 00:21:01.641 "compare_and_write": false, 00:21:01.641 "abort": false, 00:21:01.641 "seek_hole": true, 00:21:01.641 "seek_data": true, 00:21:01.641 "copy": false, 00:21:01.641 "nvme_iov_md": false 00:21:01.641 }, 00:21:01.641 "driver_specific": { 00:21:01.641 "lvol": { 00:21:01.641 "lvol_store_uuid": "eca75305-0086-4562-a699-0d2cab7ebed6", 00:21:01.641 "base_bdev": "nvme0n1", 00:21:01.641 "thin_provision": true, 00:21:01.641 "num_allocated_clusters": 0, 00:21:01.641 "snapshot": false, 00:21:01.641 "clone": false, 00:21:01.641 "esnap_clone": false 00:21:01.641 } 00:21:01.641 } 00:21:01.641 } 00:21:01.641 ]' 00:21:01.641 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:01.900 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:01.900 12:31:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:01.900 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:01.900 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:01.900 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:01.900 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:01.900 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:01.900 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:02.159 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:02.418 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:02.418 { 00:21:02.418 "name": "ec09bb56-3254-41bb-8837-e0f04b013f80", 00:21:02.418 "aliases": [ 00:21:02.418 "lvs/nvme0n1p0" 00:21:02.418 ], 00:21:02.418 "product_name": "Logical Volume", 00:21:02.418 "block_size": 4096, 00:21:02.418 "num_blocks": 26476544, 00:21:02.418 "uuid": "ec09bb56-3254-41bb-8837-e0f04b013f80", 00:21:02.418 "assigned_rate_limits": { 00:21:02.418 "rw_ios_per_sec": 0, 00:21:02.418 "rw_mbytes_per_sec": 0, 00:21:02.418 "r_mbytes_per_sec": 0, 00:21:02.418 "w_mbytes_per_sec": 0 00:21:02.418 }, 00:21:02.418 "claimed": false, 00:21:02.418 "zoned": false, 00:21:02.419 "supported_io_types": { 00:21:02.419 "read": true, 00:21:02.419 "write": true, 00:21:02.419 "unmap": true, 00:21:02.419 "flush": false, 00:21:02.419 "reset": true, 00:21:02.419 "nvme_admin": false, 00:21:02.419 "nvme_io": false, 00:21:02.419 "nvme_io_md": false, 00:21:02.419 "write_zeroes": true, 00:21:02.419 "zcopy": false, 00:21:02.419 "get_zone_info": false, 00:21:02.419 "zone_management": false, 00:21:02.419 "zone_append": false, 00:21:02.419 "compare": false, 00:21:02.419 "compare_and_write": false, 00:21:02.419 "abort": false, 00:21:02.419 "seek_hole": true, 00:21:02.419 "seek_data": true, 00:21:02.419 "copy": false, 00:21:02.419 "nvme_iov_md": false 00:21:02.419 }, 00:21:02.419 "driver_specific": { 00:21:02.419 "lvol": { 00:21:02.419 "lvol_store_uuid": "eca75305-0086-4562-a699-0d2cab7ebed6", 00:21:02.419 "base_bdev": "nvme0n1", 00:21:02.419 "thin_provision": true, 00:21:02.419 "num_allocated_clusters": 0, 00:21:02.419 "snapshot": false, 00:21:02.419 "clone": false, 00:21:02.419 "esnap_clone": false 00:21:02.419 } 00:21:02.419 } 00:21:02.419 } 00:21:02.419 ]' 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:02.419 12:31:25 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:02.677 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec09bb56-3254-41bb-8837-e0f04b013f80 00:21:02.935 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:02.935 { 00:21:02.935 "name": "ec09bb56-3254-41bb-8837-e0f04b013f80", 00:21:02.935 "aliases": [ 00:21:02.935 "lvs/nvme0n1p0" 00:21:02.935 ], 00:21:02.935 "product_name": "Logical Volume", 00:21:02.935 "block_size": 4096, 00:21:02.935 "num_blocks": 26476544, 00:21:02.935 "uuid": "ec09bb56-3254-41bb-8837-e0f04b013f80", 00:21:02.935 "assigned_rate_limits": { 00:21:02.935 "rw_ios_per_sec": 0, 00:21:02.935 "rw_mbytes_per_sec": 0, 00:21:02.935 "r_mbytes_per_sec": 0, 00:21:02.935 "w_mbytes_per_sec": 0 00:21:02.935 }, 00:21:02.935 "claimed": false, 00:21:02.935 "zoned": false, 00:21:02.935 "supported_io_types": { 00:21:02.935 "read": true, 00:21:02.935 "write": true, 00:21:02.935 "unmap": true, 00:21:02.935 "flush": false, 00:21:02.935 "reset": true, 00:21:02.935 "nvme_admin": false, 00:21:02.935 "nvme_io": false, 00:21:02.935 "nvme_io_md": false, 00:21:02.935 "write_zeroes": true, 00:21:02.935 "zcopy": false, 00:21:02.936 "get_zone_info": false, 00:21:02.936 "zone_management": false, 00:21:02.936 "zone_append": false, 00:21:02.936 "compare": false, 00:21:02.936 "compare_and_write": false, 00:21:02.936 "abort": false, 00:21:02.936 "seek_hole": true, 00:21:02.936 "seek_data": true, 00:21:02.936 "copy": false, 00:21:02.936 "nvme_iov_md": false 00:21:02.936 }, 00:21:02.936 "driver_specific": { 00:21:02.936 "lvol": { 00:21:02.936 "lvol_store_uuid": "eca75305-0086-4562-a699-0d2cab7ebed6", 00:21:02.936 "base_bdev": "nvme0n1", 00:21:02.936 "thin_provision": true, 00:21:02.936 "num_allocated_clusters": 0, 00:21:02.936 "snapshot": false, 00:21:02.936 "clone": false, 00:21:02.936 "esnap_clone": false 00:21:02.936 } 00:21:02.936 } 00:21:02.936 } 00:21:02.936 ]' 00:21:02.936 12:31:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:02.936 12:31:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ec09bb56-3254-41bb-8837-e0f04b013f80 -c nvc0n1p0 --l2p_dram_limit 20 00:21:03.195 [2024-10-07 12:31:26.259372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.195 [2024-10-07 12:31:26.259627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:03.195 [2024-10-07 12:31:26.259647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:03.195 [2024-10-07 12:31:26.259661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.195 [2024-10-07 12:31:26.259727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.195 [2024-10-07 12:31:26.259742] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.195 [2024-10-07 12:31:26.259755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:03.196 [2024-10-07 12:31:26.259767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.259788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:03.196 [2024-10-07 12:31:26.260811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:03.196 [2024-10-07 12:31:26.260844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.260858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.196 [2024-10-07 12:31:26.260869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:21:03.196 [2024-10-07 12:31:26.260882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.260969] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 741eeee1-2e45-41be-b2a7-62d4c26705f8 00:21:03.196 [2024-10-07 12:31:26.263385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.263421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:03.196 [2024-10-07 12:31:26.263438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:03.196 [2024-10-07 12:31:26.263449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.277131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.277160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.196 [2024-10-07 12:31:26.277176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.641 ms 00:21:03.196 [2024-10-07 12:31:26.277186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.277284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.277298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.196 [2024-10-07 12:31:26.277317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:03.196 [2024-10-07 12:31:26.277327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.277384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.277395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:03.196 [2024-10-07 12:31:26.277413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:03.196 [2024-10-07 12:31:26.277422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.277450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:03.196 [2024-10-07 12:31:26.283434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.283474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.196 [2024-10-07 12:31:26.283487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.005 ms 00:21:03.196 [2024-10-07 12:31:26.283501] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.283534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.283547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:03.196 [2024-10-07 12:31:26.283558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:03.196 [2024-10-07 12:31:26.283571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.283603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:03.196 [2024-10-07 12:31:26.283748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:03.196 [2024-10-07 12:31:26.283764] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:03.196 [2024-10-07 12:31:26.283781] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:03.196 [2024-10-07 12:31:26.283794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:03.196 [2024-10-07 12:31:26.283810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:03.196 [2024-10-07 12:31:26.283822] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:03.196 [2024-10-07 12:31:26.283839] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:03.196 [2024-10-07 12:31:26.283849] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:03.196 [2024-10-07 12:31:26.283862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:03.196 [2024-10-07 12:31:26.283872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.283885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:03.196 [2024-10-07 12:31:26.283896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:21:03.196 [2024-10-07 12:31:26.283922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.283992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.196 [2024-10-07 12:31:26.284009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:03.196 [2024-10-07 12:31:26.284036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:03.196 [2024-10-07 12:31:26.284055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.196 [2024-10-07 12:31:26.284136] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:03.196 [2024-10-07 12:31:26.284152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:03.196 [2024-10-07 12:31:26.284163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:03.196 [2024-10-07 12:31:26.284202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:03.196 
[2024-10-07 12:31:26.284224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:03.196 [2024-10-07 12:31:26.284233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.196 [2024-10-07 12:31:26.284256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:03.196 [2024-10-07 12:31:26.284281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:03.196 [2024-10-07 12:31:26.284292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.196 [2024-10-07 12:31:26.284310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:03.196 [2024-10-07 12:31:26.284320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:03.196 [2024-10-07 12:31:26.284335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:03.196 [2024-10-07 12:31:26.284358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:03.196 [2024-10-07 12:31:26.284391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:03.196 [2024-10-07 12:31:26.284425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:03.196 [2024-10-07 12:31:26.284456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:03.196 [2024-10-07 12:31:26.284490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:03.196 [2024-10-07 12:31:26.284523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.196 [2024-10-07 12:31:26.284544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:03.196 [2024-10-07 12:31:26.284557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:03.196 [2024-10-07 12:31:26.284566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.196 [2024-10-07 12:31:26.284578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:03.196 [2024-10-07 12:31:26.284588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:03.196 [2024-10-07 12:31:26.284600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:03.196 [2024-10-07 12:31:26.284621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:03.196 [2024-10-07 12:31:26.284629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284641] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:03.196 [2024-10-07 12:31:26.284652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:03.196 [2024-10-07 12:31:26.284668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.196 [2024-10-07 12:31:26.284698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:03.196 [2024-10-07 12:31:26.284707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:03.196 [2024-10-07 12:31:26.284721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:03.196 [2024-10-07 12:31:26.284731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:03.196 [2024-10-07 12:31:26.284744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:03.196 [2024-10-07 12:31:26.284754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:03.196 [2024-10-07 12:31:26.284772] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:03.196 [2024-10-07 12:31:26.284787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.196 [2024-10-07 12:31:26.284803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:03.196 [2024-10-07 12:31:26.284814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:03.196 [2024-10-07 12:31:26.284827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:03.196 [2024-10-07 12:31:26.284838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:03.196 [2024-10-07 12:31:26.284852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:03.197 [2024-10-07 12:31:26.284862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:03.197 [2024-10-07 12:31:26.284875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:03.197 [2024-10-07 12:31:26.284886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:03.197 [2024-10-07 12:31:26.284903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:03.197 [2024-10-07 12:31:26.284925] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:03.197 [2024-10-07 12:31:26.284939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:03.197 [2024-10-07 12:31:26.284950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:03.197 [2024-10-07 12:31:26.284964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:03.197 [2024-10-07 12:31:26.284975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:03.197 [2024-10-07 12:31:26.284995] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:03.197 [2024-10-07 12:31:26.285007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.197 [2024-10-07 12:31:26.285021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:03.197 [2024-10-07 12:31:26.285032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:03.197 [2024-10-07 12:31:26.285045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:03.197 [2024-10-07 12:31:26.285056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:03.197 [2024-10-07 12:31:26.285071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.197 [2024-10-07 12:31:26.285083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:03.197 [2024-10-07 12:31:26.285099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:21:03.197 [2024-10-07 12:31:26.285109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.197 [2024-10-07 12:31:26.285152] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
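The layout dump above pins down the FTL geometry: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB "Region l2p" reported, and at the 4096-byte block size those entries map 80 GiB of user-visible space carved from the 103424 MiB base device. The --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that table may stay resident in DRAM; the log confirms the cap a little further down ("l2p maximum resident size is: 19 (of 20) MiB"). A minimal shell sketch of the arithmetic, assuming only the numbers printed in the dump (the variable names are ours, not the test script's):

    # Reproduce the ftl_layout_setup sizing printed in the dump above.
    l2p_entries=20971520     # "L2P entries"
    l2p_addr_size=4          # "L2P address size" in bytes
    block_size=4096          # lvol block size reported by bdev_get_bdevs
    echo "L2P table:   $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"       # -> 80
    echo "Addressable: $(( l2p_entries * block_size / 1024 / 1024 / 1024 )) GiB"   # -> 80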
00:21:03.197 [2024-10-07 12:31:26.285165] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:07.391 [2024-10-07 12:31:30.203149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.203231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:07.391 [2024-10-07 12:31:30.203255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3924.349 ms 00:21:07.391 [2024-10-07 12:31:30.203267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.266322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.266387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.391 [2024-10-07 12:31:30.266409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.747 ms 00:21:07.391 [2024-10-07 12:31:30.266422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.266616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.266631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.391 [2024-10-07 12:31:30.266654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:07.391 [2024-10-07 12:31:30.266665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.318728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.318780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.391 [2024-10-07 12:31:30.318806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.099 ms 00:21:07.391 [2024-10-07 12:31:30.318816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.318880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.318891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.391 [2024-10-07 12:31:30.318916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.391 [2024-10-07 12:31:30.318926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.319786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.319814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.391 [2024-10-07 12:31:30.319830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:21:07.391 [2024-10-07 12:31:30.319841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.319990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.320008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.391 [2024-10-07 12:31:30.320026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:07.391 [2024-10-07 12:31:30.320036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.341146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.341195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.391 [2024-10-07 
12:31:30.341214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.118 ms 00:21:07.391 [2024-10-07 12:31:30.341225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.355751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:07.391 [2024-10-07 12:31:30.365207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.365266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:07.391 [2024-10-07 12:31:30.365283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.891 ms 00:21:07.391 [2024-10-07 12:31:30.365298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.465304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.465386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:07.391 [2024-10-07 12:31:30.465406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.106 ms 00:21:07.391 [2024-10-07 12:31:30.465420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.465641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.465663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:07.391 [2024-10-07 12:31:30.465676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:21:07.391 [2024-10-07 12:31:30.465689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.502370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.391 [2024-10-07 12:31:30.502432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:07.391 [2024-10-07 12:31:30.502450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.683 ms 00:21:07.391 [2024-10-07 12:31:30.502468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.391 [2024-10-07 12:31:30.538747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.392 [2024-10-07 12:31:30.539042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:07.392 [2024-10-07 12:31:30.539070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.291 ms 00:21:07.392 [2024-10-07 12:31:30.539087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.392 [2024-10-07 12:31:30.539866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.392 [2024-10-07 12:31:30.539893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:07.392 [2024-10-07 12:31:30.539924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:21:07.392 [2024-10-07 12:31:30.539938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.392 [2024-10-07 12:31:30.646721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.392 [2024-10-07 12:31:30.646808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:07.392 [2024-10-07 12:31:30.646827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.884 ms 00:21:07.392 [2024-10-07 12:31:30.646841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.651 [2024-10-07 
12:31:30.686659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.651 [2024-10-07 12:31:30.686735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:07.651 [2024-10-07 12:31:30.686754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.747 ms 00:21:07.651 [2024-10-07 12:31:30.686770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.651 [2024-10-07 12:31:30.725164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.651 [2024-10-07 12:31:30.725234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:07.651 [2024-10-07 12:31:30.725251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.400 ms 00:21:07.651 [2024-10-07 12:31:30.725265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.651 [2024-10-07 12:31:30.760856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.651 [2024-10-07 12:31:30.760934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:07.651 [2024-10-07 12:31:30.760952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.596 ms 00:21:07.651 [2024-10-07 12:31:30.760966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.651 [2024-10-07 12:31:30.761015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.651 [2024-10-07 12:31:30.761034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:07.651 [2024-10-07 12:31:30.761046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:07.651 [2024-10-07 12:31:30.761061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.651 [2024-10-07 12:31:30.761178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.651 [2024-10-07 12:31:30.761197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:07.651 [2024-10-07 12:31:30.761208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:07.651 [2024-10-07 12:31:30.761222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.651 [2024-10-07 12:31:30.762699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4510.097 ms, result 0 00:21:07.651 { 00:21:07.652 "name": "ftl0", 00:21:07.652 "uuid": "741eeee1-2e45-41be-b2a7-62d4c26705f8" 00:21:07.652 } 00:21:07.652 12:31:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:07.652 12:31:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:07.652 12:31:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:07.911 12:31:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:07.911 [2024-10-07 12:31:31.094334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:07.911 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:07.911 Zero copy mechanism will not be used. 00:21:07.911 Running I/O for 4 seconds... 
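This first bdevperf pass drives ftl0 at queue depth 1 with 69632-byte random writes: 68 KiB, i.e. 17 blocks of 4 KiB, which is why the 65536-byte zero-copy threshold warning fires just above. The MiB/s column in the per-second samples that follow is simply IOPS * io_size / 2^20; a quick sanity check against the summary figures below (illustrative only, not part of the test):

    # Relate bdevperf's IOPS and MiB/s columns for the qd=1, 69632-byte run.
    awk 'BEGIN { printf "%.2f MiB/s\n", 1401.10 * 69632 / 1048576 }'   # -> 93.04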
00:21:10.228 1336.00 IOPS, 88.72 MiB/s [2024-10-07T12:31:34.457Z] 1359.50 IOPS, 90.28 MiB/s [2024-10-07T12:31:35.396Z] 1381.33 IOPS, 91.73 MiB/s [2024-10-07T12:31:35.396Z] 1401.50 IOPS, 93.07 MiB/s 00:21:12.105 Latency(us) 00:21:12.105 [2024-10-07T12:31:35.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.105 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:12.105 ftl0 : 4.00 1401.10 93.04 0.00 0.00 752.05 284.58 9633.00 00:21:12.105 [2024-10-07T12:31:35.396Z] =================================================================================================================== 00:21:12.105 [2024-10-07T12:31:35.396Z] Total : 1401.10 93.04 0.00 0.00 752.05 284.58 9633.00 00:21:12.105 [2024-10-07 12:31:35.100397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:12.105 { 00:21:12.105 "results": [ 00:21:12.105 { 00:21:12.105 "job": "ftl0", 00:21:12.105 "core_mask": "0x1", 00:21:12.105 "workload": "randwrite", 00:21:12.105 "status": "finished", 00:21:12.105 "queue_depth": 1, 00:21:12.105 "io_size": 69632, 00:21:12.105 "runtime": 4.001861, 00:21:12.105 "iops": 1401.098139090788, 00:21:12.105 "mibps": 93.04167329899764, 00:21:12.105 "io_failed": 0, 00:21:12.105 "io_timeout": 0, 00:21:12.105 "avg_latency_us": 752.0467092554272, 00:21:12.105 "min_latency_us": 284.58152610441766, 00:21:12.105 "max_latency_us": 9633.002409638555 00:21:12.105 } 00:21:12.105 ], 00:21:12.105 "core_count": 1 00:21:12.105 } 00:21:12.105 12:31:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:12.105 [2024-10-07 12:31:35.217363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:12.105 Running I/O for 4 seconds... 
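The qd=1 summary lands at 1401.10 IOPS / 93.04 MiB/s with a 752 us average latency over a 4.0019 s runtime and zero failed I/Os; the JSON blob above carries the same figures at full precision. The next pass, launched above, keeps the randwrite workload but switches to 4 KiB I/Os at queue depth 128. If the results JSON were captured to a file rather than printed inline, a jq one-liner along these lines could pull the headline numbers back out (illustrative only; results.json is a hypothetical capture of the blob above):

    # Hypothetical post-processing of a saved perform_tests result.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s"' results.json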
00:21:14.016 10583.00 IOPS, 41.34 MiB/s [2024-10-07T12:31:38.245Z] 10364.50 IOPS, 40.49 MiB/s [2024-10-07T12:31:39.622Z] 10734.33 IOPS, 41.93 MiB/s [2024-10-07T12:31:39.622Z] 10905.00 IOPS, 42.60 MiB/s 00:21:16.331 Latency(us) 00:21:16.331 [2024-10-07T12:31:39.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.331 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:16.331 ftl0 : 4.02 10894.64 42.56 0.00 0.00 11725.09 227.01 34531.42 00:21:16.331 [2024-10-07T12:31:39.622Z] =================================================================================================================== 00:21:16.331 [2024-10-07T12:31:39.622Z] Total : 10894.64 42.56 0.00 0.00 11725.09 0.00 34531.42 00:21:16.331 { 00:21:16.331 "results": [ 00:21:16.331 { 00:21:16.331 "job": "ftl0", 00:21:16.331 "core_mask": "0x1", 00:21:16.331 "workload": "randwrite", 00:21:16.331 "status": "finished", 00:21:16.331 "queue_depth": 128, 00:21:16.331 "io_size": 4096, 00:21:16.331 "runtime": 4.015368, 00:21:16.331 "iops": 10894.642782429904, 00:21:16.331 "mibps": 42.557198368866814, 00:21:16.331 "io_failed": 0, 00:21:16.331 "io_timeout": 0, 00:21:16.331 "avg_latency_us": 11725.085773239714, 00:21:16.331 "min_latency_us": 227.00722891566264, 00:21:16.331 "max_latency_us": 34531.41847389558 00:21:16.331 } 00:21:16.331 ], 00:21:16.331 "core_count": 1 00:21:16.332 } 00:21:16.332 [2024-10-07 12:31:39.236303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:16.332 12:31:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:16.332 [2024-10-07 12:31:39.360611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:16.332 Running I/O for 4 seconds... 
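At queue depth 128 the 4 KiB randwrite pass sustains 10894.64 IOPS (42.56 MiB/s) at an 11.7 ms average latency, and the two figures hang together via Little's law: in-flight I/Os = IOPS * average latency, which recovers the configured queue depth. The third pass, launched above, keeps qd=128 / 4 KiB but switches to a verify workload over the LBA range reported below (start 0x0, length 0x1400000 = 20971520 blocks, the full 80 GiB mapped space). A quick check of the Little's law relation, using the summary numbers above:

    # in-flight = IOPS * avg latency (us -> s); should come out near 128.
    awk 'BEGIN { printf "%.1f\n", 10894.64 * 11725.09 / 1e6 }'   # -> 127.7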
00:21:18.207 8874.00 IOPS, 34.66 MiB/s [2024-10-07T12:31:42.434Z] 8610.50 IOPS, 33.63 MiB/s [2024-10-07T12:31:43.373Z] 8625.33 IOPS, 33.69 MiB/s [2024-10-07T12:31:43.373Z] 8758.75 IOPS, 34.21 MiB/s 00:21:20.082 Latency(us) 00:21:20.082 [2024-10-07T12:31:43.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.082 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:20.082 Verification LBA range: start 0x0 length 0x1400000 00:21:20.082 ftl0 : 4.01 8769.90 34.26 0.00 0.00 14552.04 250.04 32846.96 00:21:20.082 [2024-10-07T12:31:43.373Z] =================================================================================================================== 00:21:20.082 [2024-10-07T12:31:43.373Z] Total : 8769.90 34.26 0.00 0.00 14552.04 0.00 32846.96 00:21:20.341 { 00:21:20.341 "results": [ 00:21:20.341 { 00:21:20.341 "job": "ftl0", 00:21:20.341 "core_mask": "0x1", 00:21:20.341 "workload": "verify", 00:21:20.341 "status": "finished", 00:21:20.341 "verify_range": { 00:21:20.341 "start": 0, 00:21:20.341 "length": 20971520 00:21:20.341 }, 00:21:20.341 "queue_depth": 128, 00:21:20.341 "io_size": 4096, 00:21:20.341 "runtime": 4.009395, 00:21:20.341 "iops": 8769.901693397633, 00:21:20.341 "mibps": 34.2574284898345, 00:21:20.341 "io_failed": 0, 00:21:20.341 "io_timeout": 0, 00:21:20.341 "avg_latency_us": 14552.036314120598, 00:21:20.341 "min_latency_us": 250.03694779116466, 00:21:20.341 "max_latency_us": 32846.95903614458 00:21:20.341 } 00:21:20.341 ], 00:21:20.341 "core_count": 1 00:21:20.341 } 00:21:20.341 [2024-10-07 12:31:43.383109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:20.341 12:31:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:20.341 [2024-10-07 12:31:43.590218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.341 [2024-10-07 12:31:43.590270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:20.341 [2024-10-07 12:31:43.590286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:20.341 [2024-10-07 12:31:43.590301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.341 [2024-10-07 12:31:43.590325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:20.341 [2024-10-07 12:31:43.595137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.341 [2024-10-07 12:31:43.595171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:20.341 [2024-10-07 12:31:43.595187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.796 ms 00:21:20.341 [2024-10-07 12:31:43.595202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.341 [2024-10-07 12:31:43.597253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.341 [2024-10-07 12:31:43.597293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:20.341 [2024-10-07 12:31:43.597310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:21:20.341 [2024-10-07 12:31:43.597321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.601 [2024-10-07 12:31:43.794849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.601 [2024-10-07 12:31:43.794895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:21:20.601 [2024-10-07 12:31:43.794930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 197.823 ms 00:21:20.601 [2024-10-07 12:31:43.794941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.601 [2024-10-07 12:31:43.799832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.601 [2024-10-07 12:31:43.799866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:20.601 [2024-10-07 12:31:43.799882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:21:20.601 [2024-10-07 12:31:43.799892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.601 [2024-10-07 12:31:43.835115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.601 [2024-10-07 12:31:43.835151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:20.601 [2024-10-07 12:31:43.835169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.212 ms 00:21:20.601 [2024-10-07 12:31:43.835179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.601 [2024-10-07 12:31:43.857129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.601 [2024-10-07 12:31:43.857166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:20.601 [2024-10-07 12:31:43.857183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.941 ms 00:21:20.601 [2024-10-07 12:31:43.857193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.601 [2024-10-07 12:31:43.857338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.601 [2024-10-07 12:31:43.857351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:20.601 [2024-10-07 12:31:43.857372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:20.601 [2024-10-07 12:31:43.857382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.862 [2024-10-07 12:31:43.892866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.862 [2024-10-07 12:31:43.892910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:20.862 [2024-10-07 12:31:43.892928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.520 ms 00:21:20.862 [2024-10-07 12:31:43.892938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.862 [2024-10-07 12:31:43.928091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.862 [2024-10-07 12:31:43.928136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:20.862 [2024-10-07 12:31:43.928152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.166 ms 00:21:20.862 [2024-10-07 12:31:43.928162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.862 [2024-10-07 12:31:43.961734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.862 [2024-10-07 12:31:43.961767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:20.862 [2024-10-07 12:31:43.961784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.585 ms 00:21:20.862 [2024-10-07 12:31:43.961793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.862 [2024-10-07 12:31:43.995239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.862 [2024-10-07 12:31:43.995437] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:20.862 [2024-10-07 12:31:43.995467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.410 ms 00:21:20.862 [2024-10-07 12:31:43.995477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.862 [2024-10-07 12:31:43.995516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:20.862 [2024-10-07 12:31:43.995534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:20.862 [2024-10-07 12:31:43.995816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:20.862 [2024-10-07 12:31:43.995830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.995990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:20.863 [2024-10-07 12:31:43.996784] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:20.864 [2024-10-07 12:31:43.996797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:20.864 [2024-10-07 12:31:43.996807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:20.864 [2024-10-07 12:31:43.996820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:20.864 [2024-10-07 12:31:43.996838] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:20.864 [2024-10-07 12:31:43.996852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 741eeee1-2e45-41be-b2a7-62d4c26705f8 00:21:20.864 [2024-10-07 12:31:43.996863] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:20.864 [2024-10-07 12:31:43.996876] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:20.864 [2024-10-07 12:31:43.996886] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:20.864 [2024-10-07 12:31:43.996898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:20.864 [2024-10-07 12:31:43.996920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:20.864 [2024-10-07 12:31:43.996934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:20.864 [2024-10-07 12:31:43.996944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:20.864 [2024-10-07 12:31:43.996959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:20.864 [2024-10-07 12:31:43.996968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:20.864 [2024-10-07 12:31:43.996981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.864 [2024-10-07 12:31:43.996995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:20.864 [2024-10-07 12:31:43.997008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:21:20.864 [2024-10-07 12:31:43.997018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.864 [2024-10-07 12:31:44.016130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.864 [2024-10-07 12:31:44.016162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:20.864 [2024-10-07 12:31:44.016177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:21:20.864 [2024-10-07 12:31:44.016187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.864 [2024-10-07 12:31:44.016718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.864 [2024-10-07 12:31:44.016733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:20.864 [2024-10-07 12:31:44.016747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:21:20.864 [2024-10-07 12:31:44.016756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.864 [2024-10-07 12:31:44.065004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.864 [2024-10-07 12:31:44.065037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:20.864 [2024-10-07 12:31:44.065056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.864 [2024-10-07 12:31:44.065067] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:20.864 [2024-10-07 12:31:44.065136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.864 [2024-10-07 12:31:44.065147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:20.864 [2024-10-07 12:31:44.065160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.864 [2024-10-07 12:31:44.065170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.864 [2024-10-07 12:31:44.065263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.864 [2024-10-07 12:31:44.065277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:20.864 [2024-10-07 12:31:44.065291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.864 [2024-10-07 12:31:44.065300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.864 [2024-10-07 12:31:44.065321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:20.864 [2024-10-07 12:31:44.065335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:20.864 [2024-10-07 12:31:44.065347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:20.864 [2024-10-07 12:31:44.065357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.191279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.191522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.125 [2024-10-07 12:31:44.191558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.191569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.292412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.292470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.125 [2024-10-07 12:31:44.292490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.292502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.292657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.292671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.125 [2024-10-07 12:31:44.292685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.292695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.292756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.292768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.125 [2024-10-07 12:31:44.292786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.292797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.292945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.292960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.125 [2024-10-07 12:31:44.292978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:21.125 [2024-10-07 12:31:44.292988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.293034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.293047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:21.125 [2024-10-07 12:31:44.293061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.293074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.293123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.293135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:21.125 [2024-10-07 12:31:44.293148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.293158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.293213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.125 [2024-10-07 12:31:44.293226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:21.125 [2024-10-07 12:31:44.293243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.125 [2024-10-07 12:31:44.293254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.125 [2024-10-07 12:31:44.293414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 704.274 ms, result 0 00:21:21.125 true 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75458 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75458 ']' 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75458 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75458 00:21:21.125 killing process with pid 75458 00:21:21.125 Received shutdown signal, test time was about 4.000000 seconds 00:21:21.125 00:21:21.125 Latency(us) 00:21:21.125 [2024-10-07T12:31:44.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.125 [2024-10-07T12:31:44.416Z] =================================================================================================================== 00:21:21.125 [2024-10-07T12:31:44.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75458' 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75458 00:21:21.125 12:31:44 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75458 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:23.034 Remove shared memory files 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:23.034 12:31:46 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:23.034 ************************************ 00:21:23.034 END TEST ftl_bdevperf 00:21:23.034 ************************************ 00:21:23.034 00:21:23.034 real 0m23.966s 00:21:23.034 user 0m26.334s 00:21:23.034 sys 0m1.408s 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:23.034 12:31:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:23.034 12:31:46 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:23.034 12:31:46 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:23.034 12:31:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:23.034 12:31:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:23.034 ************************************ 00:21:23.034 START TEST ftl_trim 00:21:23.034 ************************************ 00:21:23.034 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:23.034 * Looking for test storage... 00:21:23.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:23.035 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:23.035 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:21:23.035 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:23.035 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.035 12:31:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.294 12:31:46 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:23.294 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.294 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:23.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.294 --rc genhtml_branch_coverage=1 00:21:23.294 --rc genhtml_function_coverage=1 00:21:23.294 --rc genhtml_legend=1 00:21:23.294 --rc geninfo_all_blocks=1 00:21:23.294 --rc geninfo_unexecuted_blocks=1 00:21:23.294 00:21:23.294 ' 00:21:23.294 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:23.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.294 --rc genhtml_branch_coverage=1 00:21:23.294 --rc genhtml_function_coverage=1 00:21:23.294 --rc genhtml_legend=1 00:21:23.294 --rc geninfo_all_blocks=1 00:21:23.294 --rc geninfo_unexecuted_blocks=1 00:21:23.294 00:21:23.294 ' 00:21:23.294 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:23.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.294 --rc genhtml_branch_coverage=1 00:21:23.294 --rc genhtml_function_coverage=1 00:21:23.294 --rc genhtml_legend=1 00:21:23.294 --rc geninfo_all_blocks=1 00:21:23.294 --rc geninfo_unexecuted_blocks=1 00:21:23.294 00:21:23.294 ' 00:21:23.294 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:23.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.294 --rc genhtml_branch_coverage=1 00:21:23.294 --rc genhtml_function_coverage=1 00:21:23.294 --rc genhtml_legend=1 00:21:23.294 --rc geninfo_all_blocks=1 00:21:23.294 --rc geninfo_unexecuted_blocks=1 00:21:23.294 00:21:23.294 ' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
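
The lt/cmp_versions trace above is how autotest picks its lcov flags: scripts/common.sh splits the two version strings on '.', '-' and ':' via IFS and compares them field by field, and because lcov 1.15 sorts before 2 the legacy --rc lcov_*_coverage option names are kept. A minimal standalone sketch of that comparison pattern (version_lt is an illustrative name, not the exact scripts/common.sh code):

    # Sketch of the IFS-driven, field-wise version compare behind cmp_versions.
    # Returns 0 when $1 is strictly older than $2; assumes numeric fields only.
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}   # a missing field compares as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                        # equal versions are not less-than
    }

    # As traced above: lcov 1.15 is older than 2, so keep the pre-2.0 flag names.
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
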
00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:23.294 12:31:46 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:23.295 12:31:46 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:23.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75823 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75823 00:21:23.295 12:31:46 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:23.295 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75823 ']' 00:21:23.295 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.295 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.295 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.295 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.295 12:31:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:23.295 [2024-10-07 12:31:46.493996] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:21:23.295 [2024-10-07 12:31:46.494375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75823 ] 00:21:23.554 [2024-10-07 12:31:46.674163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.813 [2024-10-07 12:31:46.951059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.813 [2024-10-07 12:31:46.951189] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.813 [2024-10-07 12:31:46.951216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.751 12:31:47 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.751 12:31:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:21:24.752 12:31:47 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:24.752 12:31:47 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:24.752 12:31:47 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:24.752 12:31:47 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:24.752 12:31:47 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:24.752 12:31:47 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:25.011 12:31:48 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:25.011 12:31:48 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:25.011 12:31:48 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:25.011 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:25.011 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:25.011 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:25.011 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:25.011 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:25.269 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:25.269 { 00:21:25.269 "name": "nvme0n1", 00:21:25.269 "aliases": [ 
00:21:25.269 "0f1b06c9-2f68-44a2-a172-4f19afd9d77f" 00:21:25.269 ], 00:21:25.269 "product_name": "NVMe disk", 00:21:25.269 "block_size": 4096, 00:21:25.269 "num_blocks": 1310720, 00:21:25.269 "uuid": "0f1b06c9-2f68-44a2-a172-4f19afd9d77f", 00:21:25.269 "numa_id": -1, 00:21:25.269 "assigned_rate_limits": { 00:21:25.269 "rw_ios_per_sec": 0, 00:21:25.269 "rw_mbytes_per_sec": 0, 00:21:25.269 "r_mbytes_per_sec": 0, 00:21:25.270 "w_mbytes_per_sec": 0 00:21:25.270 }, 00:21:25.270 "claimed": true, 00:21:25.270 "claim_type": "read_many_write_one", 00:21:25.270 "zoned": false, 00:21:25.270 "supported_io_types": { 00:21:25.270 "read": true, 00:21:25.270 "write": true, 00:21:25.270 "unmap": true, 00:21:25.270 "flush": true, 00:21:25.270 "reset": true, 00:21:25.270 "nvme_admin": true, 00:21:25.270 "nvme_io": true, 00:21:25.270 "nvme_io_md": false, 00:21:25.270 "write_zeroes": true, 00:21:25.270 "zcopy": false, 00:21:25.270 "get_zone_info": false, 00:21:25.270 "zone_management": false, 00:21:25.270 "zone_append": false, 00:21:25.270 "compare": true, 00:21:25.270 "compare_and_write": false, 00:21:25.270 "abort": true, 00:21:25.270 "seek_hole": false, 00:21:25.270 "seek_data": false, 00:21:25.270 "copy": true, 00:21:25.270 "nvme_iov_md": false 00:21:25.270 }, 00:21:25.270 "driver_specific": { 00:21:25.270 "nvme": [ 00:21:25.270 { 00:21:25.270 "pci_address": "0000:00:11.0", 00:21:25.270 "trid": { 00:21:25.270 "trtype": "PCIe", 00:21:25.270 "traddr": "0000:00:11.0" 00:21:25.270 }, 00:21:25.270 "ctrlr_data": { 00:21:25.270 "cntlid": 0, 00:21:25.270 "vendor_id": "0x1b36", 00:21:25.270 "model_number": "QEMU NVMe Ctrl", 00:21:25.270 "serial_number": "12341", 00:21:25.270 "firmware_revision": "8.0.0", 00:21:25.270 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:25.270 "oacs": { 00:21:25.270 "security": 0, 00:21:25.270 "format": 1, 00:21:25.270 "firmware": 0, 00:21:25.270 "ns_manage": 1 00:21:25.270 }, 00:21:25.270 "multi_ctrlr": false, 00:21:25.270 "ana_reporting": false 00:21:25.270 }, 00:21:25.270 "vs": { 00:21:25.270 "nvme_version": "1.4" 00:21:25.270 }, 00:21:25.270 "ns_data": { 00:21:25.270 "id": 1, 00:21:25.270 "can_share": false 00:21:25.270 } 00:21:25.270 } 00:21:25.270 ], 00:21:25.270 "mp_policy": "active_passive" 00:21:25.270 } 00:21:25.270 } 00:21:25.270 ]' 00:21:25.270 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:25.270 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:25.270 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:25.270 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:25.270 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:25.270 12:31:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:21:25.270 12:31:48 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:25.270 12:31:48 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:25.270 12:31:48 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:25.270 12:31:48 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:25.270 12:31:48 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:25.529 12:31:48 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=eca75305-0086-4562-a699-0d2cab7ebed6 00:21:25.529 12:31:48 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:25.529 12:31:48 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u eca75305-0086-4562-a699-0d2cab7ebed6 00:21:25.789 12:31:48 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:25.789 12:31:49 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1e97e3a4-9cb2-4c79-9a40-2ec513249572 00:21:25.789 12:31:49 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1e97e3a4-9cb2-4c79-9a40-2ec513249572 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:26.048 12:31:49 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.048 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.048 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:26.048 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:26.048 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:26.048 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.308 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:26.308 { 00:21:26.308 "name": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:26.308 "aliases": [ 00:21:26.308 "lvs/nvme0n1p0" 00:21:26.308 ], 00:21:26.308 "product_name": "Logical Volume", 00:21:26.308 "block_size": 4096, 00:21:26.308 "num_blocks": 26476544, 00:21:26.308 "uuid": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:26.308 "assigned_rate_limits": { 00:21:26.308 "rw_ios_per_sec": 0, 00:21:26.308 "rw_mbytes_per_sec": 0, 00:21:26.308 "r_mbytes_per_sec": 0, 00:21:26.308 "w_mbytes_per_sec": 0 00:21:26.308 }, 00:21:26.308 "claimed": false, 00:21:26.308 "zoned": false, 00:21:26.308 "supported_io_types": { 00:21:26.308 "read": true, 00:21:26.308 "write": true, 00:21:26.308 "unmap": true, 00:21:26.308 "flush": false, 00:21:26.308 "reset": true, 00:21:26.308 "nvme_admin": false, 00:21:26.308 "nvme_io": false, 00:21:26.308 "nvme_io_md": false, 00:21:26.308 "write_zeroes": true, 00:21:26.308 "zcopy": false, 00:21:26.308 "get_zone_info": false, 00:21:26.308 "zone_management": false, 00:21:26.308 "zone_append": false, 00:21:26.308 "compare": false, 00:21:26.308 "compare_and_write": false, 00:21:26.308 "abort": false, 00:21:26.308 "seek_hole": true, 00:21:26.308 "seek_data": true, 00:21:26.308 "copy": false, 00:21:26.308 "nvme_iov_md": false 00:21:26.308 }, 00:21:26.308 "driver_specific": { 00:21:26.308 "lvol": { 00:21:26.308 "lvol_store_uuid": "1e97e3a4-9cb2-4c79-9a40-2ec513249572", 00:21:26.308 "base_bdev": "nvme0n1", 00:21:26.308 "thin_provision": true, 00:21:26.308 "num_allocated_clusters": 0, 00:21:26.308 "snapshot": false, 00:21:26.308 "clone": false, 00:21:26.308 "esnap_clone": false 00:21:26.308 } 00:21:26.308 } 00:21:26.308 } 00:21:26.308 ]' 00:21:26.308 12:31:49 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:26.308 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:26.308 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:26.308 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:26.308 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:26.308 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:26.308 12:31:49 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:26.308 12:31:49 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:26.308 12:31:49 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:26.568 12:31:49 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:26.568 12:31:49 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:26.568 12:31:49 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.568 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.568 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:26.568 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:26.568 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:26.568 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:26.827 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:26.827 { 00:21:26.827 "name": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:26.827 "aliases": [ 00:21:26.827 "lvs/nvme0n1p0" 00:21:26.827 ], 00:21:26.827 "product_name": "Logical Volume", 00:21:26.827 "block_size": 4096, 00:21:26.827 "num_blocks": 26476544, 00:21:26.827 "uuid": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:26.827 "assigned_rate_limits": { 00:21:26.827 "rw_ios_per_sec": 0, 00:21:26.827 "rw_mbytes_per_sec": 0, 00:21:26.827 "r_mbytes_per_sec": 0, 00:21:26.827 "w_mbytes_per_sec": 0 00:21:26.827 }, 00:21:26.827 "claimed": false, 00:21:26.827 "zoned": false, 00:21:26.827 "supported_io_types": { 00:21:26.827 "read": true, 00:21:26.827 "write": true, 00:21:26.827 "unmap": true, 00:21:26.827 "flush": false, 00:21:26.827 "reset": true, 00:21:26.827 "nvme_admin": false, 00:21:26.827 "nvme_io": false, 00:21:26.827 "nvme_io_md": false, 00:21:26.827 "write_zeroes": true, 00:21:26.827 "zcopy": false, 00:21:26.827 "get_zone_info": false, 00:21:26.827 "zone_management": false, 00:21:26.827 "zone_append": false, 00:21:26.827 "compare": false, 00:21:26.827 "compare_and_write": false, 00:21:26.827 "abort": false, 00:21:26.827 "seek_hole": true, 00:21:26.827 "seek_data": true, 00:21:26.827 "copy": false, 00:21:26.827 "nvme_iov_md": false 00:21:26.827 }, 00:21:26.827 "driver_specific": { 00:21:26.827 "lvol": { 00:21:26.827 "lvol_store_uuid": "1e97e3a4-9cb2-4c79-9a40-2ec513249572", 00:21:26.827 "base_bdev": "nvme0n1", 00:21:26.827 "thin_provision": true, 00:21:26.827 "num_allocated_clusters": 0, 00:21:26.827 "snapshot": false, 00:21:26.827 "clone": false, 00:21:26.827 "esnap_clone": false 00:21:26.827 } 00:21:26.827 } 00:21:26.827 } 00:21:26.827 ]' 00:21:26.827 12:31:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:26.827 12:31:50 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:21:26.827 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:26.827 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:26.827 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:26.827 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:26.827 12:31:50 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:26.827 12:31:50 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:27.087 12:31:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:27.087 12:31:50 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:27.087 12:31:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:27.087 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:27.087 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:27.087 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:27.087 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:27.087 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 945f45b3-6a8e-4837-99ad-6d12f23e163c 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:27.346 { 00:21:27.346 "name": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:27.346 "aliases": [ 00:21:27.346 "lvs/nvme0n1p0" 00:21:27.346 ], 00:21:27.346 "product_name": "Logical Volume", 00:21:27.346 "block_size": 4096, 00:21:27.346 "num_blocks": 26476544, 00:21:27.346 "uuid": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:27.346 "assigned_rate_limits": { 00:21:27.346 "rw_ios_per_sec": 0, 00:21:27.346 "rw_mbytes_per_sec": 0, 00:21:27.346 "r_mbytes_per_sec": 0, 00:21:27.346 "w_mbytes_per_sec": 0 00:21:27.346 }, 00:21:27.346 "claimed": false, 00:21:27.346 "zoned": false, 00:21:27.346 "supported_io_types": { 00:21:27.346 "read": true, 00:21:27.346 "write": true, 00:21:27.346 "unmap": true, 00:21:27.346 "flush": false, 00:21:27.346 "reset": true, 00:21:27.346 "nvme_admin": false, 00:21:27.346 "nvme_io": false, 00:21:27.346 "nvme_io_md": false, 00:21:27.346 "write_zeroes": true, 00:21:27.346 "zcopy": false, 00:21:27.346 "get_zone_info": false, 00:21:27.346 "zone_management": false, 00:21:27.346 "zone_append": false, 00:21:27.346 "compare": false, 00:21:27.346 "compare_and_write": false, 00:21:27.346 "abort": false, 00:21:27.346 "seek_hole": true, 00:21:27.346 "seek_data": true, 00:21:27.346 "copy": false, 00:21:27.346 "nvme_iov_md": false 00:21:27.346 }, 00:21:27.346 "driver_specific": { 00:21:27.346 "lvol": { 00:21:27.346 "lvol_store_uuid": "1e97e3a4-9cb2-4c79-9a40-2ec513249572", 00:21:27.346 "base_bdev": "nvme0n1", 00:21:27.346 "thin_provision": true, 00:21:27.346 "num_allocated_clusters": 0, 00:21:27.346 "snapshot": false, 00:21:27.346 "clone": false, 00:21:27.346 "esnap_clone": false 00:21:27.346 } 00:21:27.346 } 00:21:27.346 } 00:21:27.346 ]' 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:27.346 12:31:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:27.346 12:31:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:27.347 12:31:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 945f45b3-6a8e-4837-99ad-6d12f23e163c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:27.607 [2024-10-07 12:31:50.794567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.794620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:27.607 [2024-10-07 12:31:50.794641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:27.607 [2024-10-07 12:31:50.794670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.797879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.797927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.607 [2024-10-07 12:31:50.797943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:21:27.607 [2024-10-07 12:31:50.797954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.798078] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:27.607 [2024-10-07 12:31:50.799054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:27.607 [2024-10-07 12:31:50.799091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.799103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.607 [2024-10-07 12:31:50.799116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:21:27.607 [2024-10-07 12:31:50.799129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.799241] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8095ccf6-8e51-4990-b00e-376620ab48a2 00:21:27.607 [2024-10-07 12:31:50.800650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.800687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:27.607 [2024-10-07 12:31:50.800699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:27.607 [2024-10-07 12:31:50.800712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.808128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.808165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.607 [2024-10-07 12:31:50.808192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.338 ms 00:21:27.607 [2024-10-07 12:31:50.808205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.808349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.808367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.607 [2024-10-07 12:31:50.808379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.079 ms 00:21:27.607 [2024-10-07 12:31:50.808399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.808433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.808447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:27.607 [2024-10-07 12:31:50.808457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:27.607 [2024-10-07 12:31:50.808469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.808501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:27.607 [2024-10-07 12:31:50.813722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.813757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.607 [2024-10-07 12:31:50.813772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.232 ms 00:21:27.607 [2024-10-07 12:31:50.813782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.813856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.813869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:27.607 [2024-10-07 12:31:50.813883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:27.607 [2024-10-07 12:31:50.813896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.813957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:27.607 [2024-10-07 12:31:50.814079] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:27.607 [2024-10-07 12:31:50.814099] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:27.607 [2024-10-07 12:31:50.814129] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:27.607 [2024-10-07 12:31:50.814148] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:27.607 [2024-10-07 12:31:50.814160] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:27.607 [2024-10-07 12:31:50.814174] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:27.607 [2024-10-07 12:31:50.814184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:27.607 [2024-10-07 12:31:50.814195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:27.607 [2024-10-07 12:31:50.814206] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:27.607 [2024-10-07 12:31:50.814219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 [2024-10-07 12:31:50.814230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:27.607 [2024-10-07 12:31:50.814243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:21:27.607 [2024-10-07 12:31:50.814253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.607 [2024-10-07 12:31:50.814342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.607 
[2024-10-07 12:31:50.814357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:27.607 [2024-10-07 12:31:50.814369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:27.607 [2024-10-07 12:31:50.814379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.608 [2024-10-07 12:31:50.814492] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:27.608 [2024-10-07 12:31:50.814505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:27.608 [2024-10-07 12:31:50.814517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:27.608 [2024-10-07 12:31:50.814549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:27.608 [2024-10-07 12:31:50.814582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.608 [2024-10-07 12:31:50.814609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:27.608 [2024-10-07 12:31:50.814618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:27.608 [2024-10-07 12:31:50.814630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.608 [2024-10-07 12:31:50.814639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:27.608 [2024-10-07 12:31:50.814652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:27.608 [2024-10-07 12:31:50.814661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:27.608 [2024-10-07 12:31:50.814685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:27.608 [2024-10-07 12:31:50.814718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:27.608 [2024-10-07 12:31:50.814749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:27.608 [2024-10-07 12:31:50.814781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:27.608 [2024-10-07 12:31:50.814810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.608 [2024-10-07 12:31:50.814830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:27.608 [2024-10-07 12:31:50.814844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.608 [2024-10-07 12:31:50.814864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:27.608 [2024-10-07 12:31:50.814873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:27.608 [2024-10-07 12:31:50.814884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.608 [2024-10-07 12:31:50.814893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:27.608 [2024-10-07 12:31:50.814916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:27.608 [2024-10-07 12:31:50.814924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:27.608 [2024-10-07 12:31:50.814945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:27.608 [2024-10-07 12:31:50.814956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.814965] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:27.608 [2024-10-07 12:31:50.814984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:27.608 [2024-10-07 12:31:50.814996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.608 [2024-10-07 12:31:50.815010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.608 [2024-10-07 12:31:50.815020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:27.608 [2024-10-07 12:31:50.815037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:27.608 [2024-10-07 12:31:50.815046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:27.608 [2024-10-07 12:31:50.815058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:27.608 [2024-10-07 12:31:50.815066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:27.608 [2024-10-07 12:31:50.815079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:27.608 [2024-10-07 12:31:50.815098] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:27.608 [2024-10-07 12:31:50.815113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:27.608 [2024-10-07 12:31:50.815138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:27.608 [2024-10-07 12:31:50.815148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:27.608 [2024-10-07 12:31:50.815160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:27.608 [2024-10-07 12:31:50.815170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:27.608 [2024-10-07 12:31:50.815183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:27.608 [2024-10-07 12:31:50.815192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:27.608 [2024-10-07 12:31:50.815205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:27.608 [2024-10-07 12:31:50.815215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:27.608 [2024-10-07 12:31:50.815230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:27.608 [2024-10-07 12:31:50.815285] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:27.608 [2024-10-07 12:31:50.815299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:27.608 [2024-10-07 12:31:50.815324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:27.608 [2024-10-07 12:31:50.815334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:27.608 [2024-10-07 12:31:50.815347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:27.608 [2024-10-07 12:31:50.815357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.608 [2024-10-07 12:31:50.815370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:27.608 [2024-10-07 12:31:50.815380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:21:27.608 [2024-10-07 12:31:50.815393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.608 [2024-10-07 12:31:50.815475] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:27.608 [2024-10-07 12:31:50.815497] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:31.830 [2024-10-07 12:31:54.636109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.636193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:31.830 [2024-10-07 12:31:54.636211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3826.835 ms 00:21:31.830 [2024-10-07 12:31:54.636226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.683452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.683515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.830 [2024-10-07 12:31:54.683535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.883 ms 00:21:31.830 [2024-10-07 12:31:54.683552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.683717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.683736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:31.830 [2024-10-07 12:31:54.683749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:31.830 [2024-10-07 12:31:54.683768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.729189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.729239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.830 [2024-10-07 12:31:54.729253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.455 ms 00:21:31.830 [2024-10-07 12:31:54.729283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.729383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.729402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.830 [2024-10-07 12:31:54.729413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:31.830 [2024-10-07 12:31:54.729426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.729865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.729881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.830 [2024-10-07 12:31:54.729891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:21:31.830 [2024-10-07 12:31:54.729904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.730025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.730040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.830 [2024-10-07 12:31:54.730054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:31.830 [2024-10-07 12:31:54.730069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.750752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.750797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:31.830 [2024-10-07 12:31:54.750813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.684 ms 00:21:31.830 [2024-10-07 12:31:54.750841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.763603] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:31.830 [2024-10-07 12:31:54.779991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.780039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:31.830 [2024-10-07 12:31:54.780056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.068 ms 00:21:31.830 [2024-10-07 12:31:54.780067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.887024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.830 [2024-10-07 12:31:54.887084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:31.830 [2024-10-07 12:31:54.887103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.026 ms 00:21:31.830 [2024-10-07 12:31:54.887129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.830 [2024-10-07 12:31:54.887358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.831 [2024-10-07 12:31:54.887372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:31.831 [2024-10-07 12:31:54.887393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:21:31.831 [2024-10-07 12:31:54.887403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.831 [2024-10-07 12:31:54.923471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.831 [2024-10-07 12:31:54.923522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:31.831 [2024-10-07 12:31:54.923540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.089 ms 00:21:31.831 [2024-10-07 12:31:54.923567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.831 [2024-10-07 12:31:54.960319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.831 [2024-10-07 12:31:54.960357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:31.831 [2024-10-07 12:31:54.960374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.723 ms 00:21:31.831 [2024-10-07 12:31:54.960383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.831 [2024-10-07 12:31:54.961135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.831 [2024-10-07 12:31:54.961160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:31.831 [2024-10-07 12:31:54.961174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:21:31.831 [2024-10-07 12:31:54.961184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.831 [2024-10-07 12:31:55.077187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.831 [2024-10-07 12:31:55.077232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:31.831 [2024-10-07 12:31:55.077254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.148 ms 00:21:31.831 [2024-10-07 12:31:55.077265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
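
Every trace_step entry from 'Check configuration' at 12:31:50 down through the remaining steps below belongs to the startup of a single bdev_ftl_create call; the NV cache scrub alone accounts for ~3.8 s of the ~4.4 s total reported when the startup finishes. Condensed from the xtrace earlier in this test, the whole ftl0 bring-up is the rpc.py sequence below (a sketch; the UUID placeholders stand for the lvstore and lvol UUIDs this run printed, 1e97e3a4-... and 945f45b3-...):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: the QEMU NVMe at 0000:00:11.0 (nvme0n1, 1310720 x 4096 B blocks).
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # Thin-provisioned 103424 MiB lvol on top of it; this becomes the FTL base bdev.
    # (clear_lvols first removes any stale lvstore via bdev_lvol_delete_lvstore.)
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>

    # Cache device at 0000:00:10.0; split off a 5171 MiB slice to act as the NV cache.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # Assemble ftl0 across 3 cores with a 60 MiB L2P DRAM cap and 10% overprovisioning.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The --l2p_dram_limit 60 cap is what the 'l2p maximum resident size is: 59 (of 60) MiB' notice above refers to: the L2P table for the 23592960 entries listed in the layout dump is paged against that budget.
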
00:21:31.831 [2024-10-07 12:31:55.115470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.831 [2024-10-07 12:31:55.115517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:31.831 [2024-10-07 12:31:55.115534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.166 ms 00:21:31.831 [2024-10-07 12:31:55.115545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.090 [2024-10-07 12:31:55.152784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.090 [2024-10-07 12:31:55.152822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:32.090 [2024-10-07 12:31:55.152839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.221 ms 00:21:32.090 [2024-10-07 12:31:55.152865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.090 [2024-10-07 12:31:55.188927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.090 [2024-10-07 12:31:55.188964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:32.090 [2024-10-07 12:31:55.188981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.015 ms 00:21:32.090 [2024-10-07 12:31:55.188991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.090 [2024-10-07 12:31:55.189068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.090 [2024-10-07 12:31:55.189081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:32.090 [2024-10-07 12:31:55.189097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:32.090 [2024-10-07 12:31:55.189123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.090 [2024-10-07 12:31:55.189211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.090 [2024-10-07 12:31:55.189222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:32.090 [2024-10-07 12:31:55.189238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:32.090 [2024-10-07 12:31:55.189248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.090 [2024-10-07 12:31:55.190130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:32.090 [2024-10-07 12:31:55.194345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4402.437 ms, result 0 00:21:32.090 [2024-10-07 12:31:55.195194] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:32.090 { 00:21:32.090 "name": "ftl0", 00:21:32.090 "uuid": "8095ccf6-8e51-4990-b00e-376620ab48a2" 00:21:32.090 } 00:21:32.090 12:31:55 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:32.090 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:21:32.090 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:32.090 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:21:32.090 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:32.090 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:32.090 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:32.350 12:31:55 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:32.350 [ 00:21:32.350 { 00:21:32.350 "name": "ftl0", 00:21:32.350 "aliases": [ 00:21:32.350 "8095ccf6-8e51-4990-b00e-376620ab48a2" 00:21:32.350 ], 00:21:32.350 "product_name": "FTL disk", 00:21:32.350 "block_size": 4096, 00:21:32.350 "num_blocks": 23592960, 00:21:32.350 "uuid": "8095ccf6-8e51-4990-b00e-376620ab48a2", 00:21:32.350 "assigned_rate_limits": { 00:21:32.350 "rw_ios_per_sec": 0, 00:21:32.350 "rw_mbytes_per_sec": 0, 00:21:32.350 "r_mbytes_per_sec": 0, 00:21:32.350 "w_mbytes_per_sec": 0 00:21:32.350 }, 00:21:32.350 "claimed": false, 00:21:32.350 "zoned": false, 00:21:32.350 "supported_io_types": { 00:21:32.350 "read": true, 00:21:32.350 "write": true, 00:21:32.350 "unmap": true, 00:21:32.350 "flush": true, 00:21:32.350 "reset": false, 00:21:32.350 "nvme_admin": false, 00:21:32.350 "nvme_io": false, 00:21:32.350 "nvme_io_md": false, 00:21:32.350 "write_zeroes": true, 00:21:32.350 "zcopy": false, 00:21:32.350 "get_zone_info": false, 00:21:32.350 "zone_management": false, 00:21:32.350 "zone_append": false, 00:21:32.350 "compare": false, 00:21:32.350 "compare_and_write": false, 00:21:32.350 "abort": false, 00:21:32.350 "seek_hole": false, 00:21:32.350 "seek_data": false, 00:21:32.350 "copy": false, 00:21:32.350 "nvme_iov_md": false 00:21:32.350 }, 00:21:32.350 "driver_specific": { 00:21:32.350 "ftl": { 00:21:32.350 "base_bdev": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 00:21:32.350 "cache": "nvc0n1p0" 00:21:32.350 } 00:21:32.350 } 00:21:32.350 } 00:21:32.350 ] 00:21:32.350 12:31:55 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:21:32.350 12:31:55 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:32.350 12:31:55 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:32.610 12:31:55 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:32.610 12:31:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:32.869 12:31:56 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:32.869 { 00:21:32.869 "name": "ftl0", 00:21:32.869 "aliases": [ 00:21:32.869 "8095ccf6-8e51-4990-b00e-376620ab48a2" 00:21:32.869 ], 00:21:32.869 "product_name": "FTL disk", 00:21:32.869 "block_size": 4096, 00:21:32.869 "num_blocks": 23592960, 00:21:32.869 "uuid": "8095ccf6-8e51-4990-b00e-376620ab48a2", 00:21:32.869 "assigned_rate_limits": { 00:21:32.869 "rw_ios_per_sec": 0, 00:21:32.869 "rw_mbytes_per_sec": 0, 00:21:32.869 "r_mbytes_per_sec": 0, 00:21:32.869 "w_mbytes_per_sec": 0 00:21:32.869 }, 00:21:32.869 "claimed": false, 00:21:32.869 "zoned": false, 00:21:32.869 "supported_io_types": { 00:21:32.869 "read": true, 00:21:32.870 "write": true, 00:21:32.870 "unmap": true, 00:21:32.870 "flush": true, 00:21:32.870 "reset": false, 00:21:32.870 "nvme_admin": false, 00:21:32.870 "nvme_io": false, 00:21:32.870 "nvme_io_md": false, 00:21:32.870 "write_zeroes": true, 00:21:32.870 "zcopy": false, 00:21:32.870 "get_zone_info": false, 00:21:32.870 "zone_management": false, 00:21:32.870 "zone_append": false, 00:21:32.870 "compare": false, 00:21:32.870 "compare_and_write": false, 00:21:32.870 "abort": false, 00:21:32.870 "seek_hole": false, 00:21:32.870 "seek_data": false, 00:21:32.870 "copy": false, 00:21:32.870 "nvme_iov_md": false 00:21:32.870 }, 00:21:32.870 "driver_specific": { 00:21:32.870 "ftl": { 00:21:32.870 "base_bdev": "945f45b3-6a8e-4837-99ad-6d12f23e163c", 
00:21:32.870 "cache": "nvc0n1p0" 00:21:32.870 } 00:21:32.870 } 00:21:32.870 } 00:21:32.870 ]' 00:21:32.870 12:31:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:32.870 12:31:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:32.870 12:31:56 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:33.130 [2024-10-07 12:31:56.226343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.226395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:33.130 [2024-10-07 12:31:56.226411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:33.130 [2024-10-07 12:31:56.226424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.130 [2024-10-07 12:31:56.226460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:33.130 [2024-10-07 12:31:56.230582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.230616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:33.130 [2024-10-07 12:31:56.230635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:21:33.130 [2024-10-07 12:31:56.230646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.130 [2024-10-07 12:31:56.231198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.231229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:33.130 [2024-10-07 12:31:56.231243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:21:33.130 [2024-10-07 12:31:56.231253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.130 [2024-10-07 12:31:56.234072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.234096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:33.130 [2024-10-07 12:31:56.234110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:21:33.130 [2024-10-07 12:31:56.234120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.130 [2024-10-07 12:31:56.239732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.239765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:33.130 [2024-10-07 12:31:56.239782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.586 ms 00:21:33.130 [2024-10-07 12:31:56.239792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.130 [2024-10-07 12:31:56.276653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.276704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:33.130 [2024-10-07 12:31:56.276725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.841 ms 00:21:33.130 [2024-10-07 12:31:56.276736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.130 [2024-10-07 12:31:56.299274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.130 [2024-10-07 12:31:56.299315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:33.130 [2024-10-07 12:31:56.299332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.442 ms
00:21:33.130 [2024-10-07 12:31:56.299343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.130 [2024-10-07 12:31:56.299557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.130 [2024-10-07 12:31:56.299571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:33.130 [2024-10-07 12:31:56.299584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms
00:21:33.130 [2024-10-07 12:31:56.299595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.130 [2024-10-07 12:31:56.334789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.130 [2024-10-07 12:31:56.334826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:21:33.130 [2024-10-07 12:31:56.334842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.214 ms
00:21:33.130 [2024-10-07 12:31:56.334852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.130 [2024-10-07 12:31:56.370604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.130 [2024-10-07 12:31:56.370639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:21:33.130 [2024-10-07 12:31:56.370658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.710 ms
00:21:33.130 [2024-10-07 12:31:56.370684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.130 [2024-10-07 12:31:56.405976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.130 [2024-10-07 12:31:56.406015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:33.130 [2024-10-07 12:31:56.406030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.084 ms
00:21:33.130 [2024-10-07 12:31:56.406039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.391 [2024-10-07 12:31:56.440992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.391 [2024-10-07 12:31:56.441032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:33.391 [2024-10-07 12:31:56.441048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.860 ms
00:21:33.391 [2024-10-07 12:31:56.441058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.391 [2024-10-07 12:31:56.441157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:33.391 [2024-10-07 12:31:56.441175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:21:33.391 [2024-10-07 12:31:56.441194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (identical entries condensed)
00:21:33.392 [2024-10-07 12:31:56.442426] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:33.392 [2024-10-07 12:31:56.442441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2
00:21:33.392 [2024-10-07 12:31:56.442452] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:33.392 [2024-10-07 12:31:56.442464] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:33.392 [2024-10-07 12:31:56.442474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:33.392 [2024-10-07 12:31:56.442486] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:33.392 [2024-10-07 12:31:56.442496] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:33.392 [2024-10-07 12:31:56.442509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:33.392 [2024-10-07 12:31:56.442518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:33.392 [2024-10-07 12:31:56.442530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:33.392 [2024-10-07 12:31:56.442538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:33.392 [2024-10-07 12:31:56.442552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.392 [2024-10-07 12:31:56.442562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:33.392 [2024-10-07 12:31:56.442575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:21:33.392 [2024-10-07 12:31:56.442587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.462151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.392 [2024-10-07 12:31:56.462187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:33.392 [2024-10-07 12:31:56.462205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.557 ms 00:21:33.392 [2024-10-07 12:31:56.462215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.462811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.392 [2024-10-07 12:31:56.462829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:33.392 [2024-10-07 12:31:56.462846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:21:33.392 [2024-10-07 12:31:56.462856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.530241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.392 [2024-10-07 12:31:56.530282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.392 [2024-10-07 12:31:56.530298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.392 [2024-10-07 12:31:56.530308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.530441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.392 [2024-10-07 12:31:56.530454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.392 [2024-10-07 12:31:56.530470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.392 [2024-10-07 12:31:56.530479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.530552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.392 [2024-10-07 12:31:56.530565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.392 [2024-10-07 12:31:56.530580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.392 [2024-10-07 12:31:56.530591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.530626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.392 [2024-10-07 12:31:56.530637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.392 [2024-10-07 12:31:56.530649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.392 [2024-10-07 12:31:56.530662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.392 [2024-10-07 12:31:56.659194] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.392 [2024-10-07 12:31:56.659252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.392 [2024-10-07 12:31:56.659269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.392 [2024-10-07 12:31:56.659279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.756368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.756424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.652 [2024-10-07 12:31:56.756440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.756470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.756570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.756582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.652 [2024-10-07 12:31:56.756599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.756609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.756669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.756680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.652 [2024-10-07 12:31:56.756708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.756718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.756864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.756878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.652 [2024-10-07 12:31:56.756890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.756917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.756973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.756986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:33.652 [2024-10-07 12:31:56.756999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.757009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.757064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.757075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.652 [2024-10-07 12:31:56.757090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.757100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.652 [2024-10-07 12:31:56.757158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.652 [2024-10-07 12:31:56.757172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.652 [2024-10-07 12:31:56.757185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.652 [2024-10-07 12:31:56.757195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:33.652 [2024-10-07 12:31:56.757383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.886 ms, result 0 00:21:33.652 true 00:21:33.652 12:31:56 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75823 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75823 ']' 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75823 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75823 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:33.652 killing process with pid 75823 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75823' 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75823 00:21:33.652 12:31:56 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75823 00:21:38.930 12:32:01 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:39.870 65536+0 records in 00:21:39.870 65536+0 records out 00:21:39.870 268435456 bytes (268 MB, 256 MiB) copied, 0.95154 s, 282 MB/s 00:21:39.870 12:32:02 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:39.870 [2024-10-07 12:32:03.010440] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
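The dd step above produces the 256 MiB random pattern that trim.sh then replays through spdk_dd: 65536 blocks of 4 KiB is 268435456 bytes, and 268435456 B / 0.95154 s matches the reported 282 MB/s. A minimal sketch of the same step, assuming only coreutils (the output path below is illustrative; the log does not show dd's destination, and the test later reads /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern):

  # Build the 256 MiB pattern file the way the log above does, then sanity-check its size.
  dd if=/dev/urandom of=/tmp/random_pattern bs=4K count=65536
  stat -c %s /tmp/random_pattern   # expect 65536 * 4096 = 268435456 bytes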
00:21:39.870 [2024-10-07 12:32:03.010571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76035 ] 00:21:40.130 [2024-10-07 12:32:03.174390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.130 [2024-10-07 12:32:03.374811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.700 [2024-10-07 12:32:03.719671] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:40.700 [2024-10-07 12:32:03.719732] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:40.700 [2024-10-07 12:32:03.881148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.881199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:40.700 [2024-10-07 12:32:03.881220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:40.700 [2024-10-07 12:32:03.881230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.884355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.884391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.700 [2024-10-07 12:32:03.884404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:21:40.700 [2024-10-07 12:32:03.884417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.884511] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:40.700 [2024-10-07 12:32:03.885491] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:40.700 [2024-10-07 12:32:03.885521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.885535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.700 [2024-10-07 12:32:03.885547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:21:40.700 [2024-10-07 12:32:03.885557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.887059] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:40.700 [2024-10-07 12:32:03.906086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.906120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:40.700 [2024-10-07 12:32:03.906150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.059 ms 00:21:40.700 [2024-10-07 12:32:03.906161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.906268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.906282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:40.700 [2024-10-07 12:32:03.906297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:40.700 [2024-10-07 12:32:03.906306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.913299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:40.700 [2024-10-07 12:32:03.913327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.700 [2024-10-07 12:32:03.913339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:21:40.700 [2024-10-07 12:32:03.913365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.913470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.913489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.700 [2024-10-07 12:32:03.913501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:40.700 [2024-10-07 12:32:03.913511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.913539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.913550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:40.700 [2024-10-07 12:32:03.913560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:40.700 [2024-10-07 12:32:03.913570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.913595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:40.700 [2024-10-07 12:32:03.918286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.918312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.700 [2024-10-07 12:32:03.918323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:21:40.700 [2024-10-07 12:32:03.918349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.918417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.918433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:40.700 [2024-10-07 12:32:03.918444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:40.700 [2024-10-07 12:32:03.918454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.918477] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:40.700 [2024-10-07 12:32:03.918498] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:40.700 [2024-10-07 12:32:03.918532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:40.700 [2024-10-07 12:32:03.918549] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:40.700 [2024-10-07 12:32:03.918639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:40.700 [2024-10-07 12:32:03.918651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:40.700 [2024-10-07 12:32:03.918664] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:40.700 [2024-10-07 12:32:03.918677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:40.700 [2024-10-07 12:32:03.918689] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:40.700 [2024-10-07 12:32:03.918715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:40.700 [2024-10-07 12:32:03.918725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:40.700 [2024-10-07 12:32:03.918735] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:40.700 [2024-10-07 12:32:03.918745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:40.700 [2024-10-07 12:32:03.918755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.918768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:40.700 [2024-10-07 12:32:03.918778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:21:40.700 [2024-10-07 12:32:03.918788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.700 [2024-10-07 12:32:03.918864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.700 [2024-10-07 12:32:03.918874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:40.700 [2024-10-07 12:32:03.918885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:40.701 [2024-10-07 12:32:03.918895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.701 [2024-10-07 12:32:03.919000] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:40.701 [2024-10-07 12:32:03.919014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:40.701 [2024-10-07 12:32:03.919028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:40.701 [2024-10-07 12:32:03.919061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:40.701 [2024-10-07 12:32:03.919090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:40.701 [2024-10-07 12:32:03.919109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:40.701 [2024-10-07 12:32:03.919129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:40.701 [2024-10-07 12:32:03.919138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:40.701 [2024-10-07 12:32:03.919147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:40.701 [2024-10-07 12:32:03.919157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:40.701 [2024-10-07 12:32:03.919166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:40.701 [2024-10-07 12:32:03.919185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919194] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:40.701 [2024-10-07 12:32:03.919212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:40.701 [2024-10-07 12:32:03.919240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:40.701 [2024-10-07 12:32:03.919268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:40.701 [2024-10-07 12:32:03.919296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:40.701 [2024-10-07 12:32:03.919324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:40.701 [2024-10-07 12:32:03.919342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:40.701 [2024-10-07 12:32:03.919351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:40.701 [2024-10-07 12:32:03.919361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:40.701 [2024-10-07 12:32:03.919370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:40.701 [2024-10-07 12:32:03.919379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:40.701 [2024-10-07 12:32:03.919388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:40.701 [2024-10-07 12:32:03.919407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:40.701 [2024-10-07 12:32:03.919416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919425] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:40.701 [2024-10-07 12:32:03.919435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:40.701 [2024-10-07 12:32:03.919444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.701 [2024-10-07 12:32:03.919464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:40.701 [2024-10-07 12:32:03.919473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:40.701 [2024-10-07 12:32:03.919483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:40.701 
[2024-10-07 12:32:03.919492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:40.701 [2024-10-07 12:32:03.919501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:40.701 [2024-10-07 12:32:03.919510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:40.701 [2024-10-07 12:32:03.919521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:40.701 [2024-10-07 12:32:03.919537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:40.701 [2024-10-07 12:32:03.919560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:40.701 [2024-10-07 12:32:03.919570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:40.701 [2024-10-07 12:32:03.919580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:40.701 [2024-10-07 12:32:03.919591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:40.701 [2024-10-07 12:32:03.919601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:40.701 [2024-10-07 12:32:03.919611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:40.701 [2024-10-07 12:32:03.919621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:40.701 [2024-10-07 12:32:03.919631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:40.701 [2024-10-07 12:32:03.919641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:40.701 [2024-10-07 12:32:03.919696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:40.701 [2024-10-07 12:32:03.919707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:40.701 [2024-10-07 12:32:03.919729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:40.701 [2024-10-07 12:32:03.919740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:40.701 [2024-10-07 12:32:03.919750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:40.701 [2024-10-07 12:32:03.919761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.701 [2024-10-07 12:32:03.919775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:40.701 [2024-10-07 12:32:03.919785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:21:40.701 [2024-10-07 12:32:03.919795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.701 [2024-10-07 12:32:03.967049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.701 [2024-10-07 12:32:03.967094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.701 [2024-10-07 12:32:03.967109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.275 ms 00:21:40.701 [2024-10-07 12:32:03.967120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.701 [2024-10-07 12:32:03.967251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.701 [2024-10-07 12:32:03.967264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:40.701 [2024-10-07 12:32:03.967275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:40.701 [2024-10-07 12:32:03.967285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.011097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.011139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.961 [2024-10-07 12:32:04.011154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.859 ms 00:21:40.961 [2024-10-07 12:32:04.011164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.011272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.011284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.961 [2024-10-07 12:32:04.011296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:40.961 [2024-10-07 12:32:04.011305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.011746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.011759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.961 [2024-10-07 12:32:04.011770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:21:40.961 [2024-10-07 12:32:04.011780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.011896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.011930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.961 [2024-10-07 12:32:04.011941] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:40.961 [2024-10-07 12:32:04.011951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.030408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.030443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.961 [2024-10-07 12:32:04.030457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.463 ms 00:21:40.961 [2024-10-07 12:32:04.030483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.049196] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:40.961 [2024-10-07 12:32:04.049371] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:40.961 [2024-10-07 12:32:04.049398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.049409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:40.961 [2024-10-07 12:32:04.049421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.821 ms 00:21:40.961 [2024-10-07 12:32:04.049431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.079684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.079729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:40.961 [2024-10-07 12:32:04.079744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.223 ms 00:21:40.961 [2024-10-07 12:32:04.079761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.098346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.098509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:40.961 [2024-10-07 12:32:04.098532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.520 ms 00:21:40.961 [2024-10-07 12:32:04.098543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.961 [2024-10-07 12:32:04.117071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.961 [2024-10-07 12:32:04.117224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:40.961 [2024-10-07 12:32:04.117245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.388 ms 00:21:40.961 [2024-10-07 12:32:04.117256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.118073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.118097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:40.962 [2024-10-07 12:32:04.118109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:21:40.962 [2024-10-07 12:32:04.118120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.203765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.203982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:40.962 [2024-10-07 12:32:04.204007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.755 ms 00:21:40.962 [2024-10-07 12:32:04.204019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.214961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:40.962 [2024-10-07 12:32:04.231345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.231401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:40.962 [2024-10-07 12:32:04.231416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.206 ms 00:21:40.962 [2024-10-07 12:32:04.231442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.231571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.231585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:40.962 [2024-10-07 12:32:04.231596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:40.962 [2024-10-07 12:32:04.231606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.231663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.231674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:40.962 [2024-10-07 12:32:04.231688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:40.962 [2024-10-07 12:32:04.231698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.231720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.231730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:40.962 [2024-10-07 12:32:04.231740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:40.962 [2024-10-07 12:32:04.231750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.962 [2024-10-07 12:32:04.231785] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:40.962 [2024-10-07 12:32:04.231811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.962 [2024-10-07 12:32:04.231821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:40.962 [2024-10-07 12:32:04.231831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:40.962 [2024-10-07 12:32:04.231844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.221 [2024-10-07 12:32:04.267648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.221 [2024-10-07 12:32:04.267689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:41.221 [2024-10-07 12:32:04.267704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.840 ms 00:21:41.221 [2024-10-07 12:32:04.267715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.221 [2024-10-07 12:32:04.267826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.221 [2024-10-07 12:32:04.267839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:41.221 [2024-10-07 12:32:04.267855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:41.221 [2024-10-07 12:32:04.267865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
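Every FTL management step in these startup and shutdown sequences is emitted as a trace_step quartet (Action, name, duration, status), so per-step timings such as the 85.755 ms 'Restore P2L checkpoints' above can be tabulated from a saved console log with plain text tools. A hedged sketch, assuming the output has been captured to build.log with one record per line (the file name is illustrative, not produced by the test):

  # Pair each step name (ftl_mngt.c:428 record) with the duration that follows it (ftl_mngt.c:430 record).
  awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print name " -> " $0 }' build.log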
00:21:41.221 [2024-10-07 12:32:04.268834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:41.221 [2024-10-07 12:32:04.273096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.024 ms, result 0
00:21:41.221 [2024-10-07 12:32:04.273992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:41.221 [2024-10-07 12:32:04.291540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:42.157  [2024-10-07T12:32:06.384Z] Copying: 21/256 [MB] (21 MBps)
[2024-10-07T12:32:07.321Z] Copying: 44/256 [MB] (22 MBps)
[2024-10-07T12:32:08.697Z] Copying: 66/256 [MB] (22 MBps)
[2024-10-07T12:32:09.634Z] Copying: 88/256 [MB] (21 MBps)
[2024-10-07T12:32:10.571Z] Copying: 109/256 [MB] (21 MBps)
[2024-10-07T12:32:11.509Z] Copying: 132/256 [MB] (22 MBps)
[2024-10-07T12:32:12.447Z] Copying: 154/256 [MB] (21 MBps)
[2024-10-07T12:32:13.386Z] Copying: 176/256 [MB] (22 MBps)
[2024-10-07T12:32:14.324Z] Copying: 198/256 [MB] (21 MBps)
[2024-10-07T12:32:15.700Z] Copying: 219/256 [MB] (21 MBps)
[2024-10-07T12:32:16.269Z] Copying: 240/256 [MB] (21 MBps)
[2024-10-07T12:32:16.269Z] Copying: 256/256 [MB] (average 21 MBps)
[2024-10-07 12:32:16.025614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:52.978 [2024-10-07 12:32:16.040595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.978 [2024-10-07 12:32:16.040655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:52.978 [2024-10-07 12:32:16.040675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:52.978 [2024-10-07 12:32:16.040687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.978 [2024-10-07 12:32:16.040728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:52.978 [2024-10-07 12:32:16.044896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.978 [2024-10-07 12:32:16.044946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:52.978 [2024-10-07 12:32:16.044962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms
00:21:52.978 [2024-10-07 12:32:16.044975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.978 [2024-10-07 12:32:16.047276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.978 [2024-10-07 12:32:16.047327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:52.978 [2024-10-07 12:32:16.047352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms
00:21:52.978 [2024-10-07 12:32:16.047364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.978 [2024-10-07 12:32:16.054211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.978 [2024-10-07 12:32:16.054257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:52.979 [2024-10-07 12:32:16.054273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms
00:21:52.979 [2024-10-07 12:32:16.054286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:52.979 [2024-10-07 12:32:16.059976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:52.979
[2024-10-07 12:32:16.060147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:52.979 [2024-10-07 12:32:16.060173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.655 ms 00:21:52.979 [2024-10-07 12:32:16.060195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.096814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.096881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:52.979 [2024-10-07 12:32:16.096921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.613 ms 00:21:52.979 [2024-10-07 12:32:16.096935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.118303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.118350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:52.979 [2024-10-07 12:32:16.118367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.319 ms 00:21:52.979 [2024-10-07 12:32:16.118380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.118544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.118560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:52.979 [2024-10-07 12:32:16.118573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:52.979 [2024-10-07 12:32:16.118585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.153485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.153552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:52.979 [2024-10-07 12:32:16.153568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.935 ms 00:21:52.979 [2024-10-07 12:32:16.153580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.188116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.188171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:52.979 [2024-10-07 12:32:16.188187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.531 ms 00:21:52.979 [2024-10-07 12:32:16.188215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.221497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.221667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:52.979 [2024-10-07 12:32:16.221707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.274 ms 00:21:52.979 [2024-10-07 12:32:16.221719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.256082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.979 [2024-10-07 12:32:16.256124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:52.979 [2024-10-07 12:32:16.256139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.305 ms 00:21:52.979 [2024-10-07 12:32:16.256150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.979 [2024-10-07 12:32:16.256210] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:52.979 [2024-10-07 12:32:16.256230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256527] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 
12:32:16.256838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.256989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.257001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.257013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.257027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.257039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:52.979 [2024-10-07 12:32:16.257051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:21:52.980 [2024-10-07 12:32:16.257179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:52.980 [2024-10-07 12:32:16.257527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:52.980 [2024-10-07 12:32:16.257539] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2 00:21:52.980 [2024-10-07 12:32:16.257551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:52.980 [2024-10-07 12:32:16.257563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:52.980 [2024-10-07 12:32:16.257576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:52.980 [2024-10-07 12:32:16.257588] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:52.980 [2024-10-07 12:32:16.257605] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:52.980 [2024-10-07 12:32:16.257617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:52.980 [2024-10-07 12:32:16.257629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:52.980 [2024-10-07 12:32:16.257640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:52.980 [2024-10-07 12:32:16.257651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:52.980 [2024-10-07 12:32:16.257663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.980 [2024-10-07 12:32:16.257674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:52.980 [2024-10-07 12:32:16.257687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:21:52.980 [2024-10-07 12:32:16.257699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.239 [2024-10-07 12:32:16.277344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.239 [2024-10-07 12:32:16.277382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:53.239 [2024-10-07 12:32:16.277403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.651 ms 00:21:53.239 [2024-10-07 12:32:16.277431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.239 [2024-10-07 12:32:16.278005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.239 [2024-10-07 12:32:16.278021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:53.239 [2024-10-07 12:32:16.278035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:21:53.240 [2024-10-07 12:32:16.278047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.240 [2024-10-07 12:32:16.324678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.240 [2024-10-07 12:32:16.324727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.240 [2024-10-07 12:32:16.324742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.240 [2024-10-07 12:32:16.324755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.240 [2024-10-07 12:32:16.324860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.240 [2024-10-07 12:32:16.324876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.240 [2024-10-07 12:32:16.324890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:53.240 [2024-10-07 12:32:16.324921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.240 [2024-10-07 12:32:16.324993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.240 [2024-10-07 12:32:16.325007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.240 [2024-10-07 12:32:16.325025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.240 [2024-10-07 12:32:16.325037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.240 [2024-10-07 12:32:16.325060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.240 [2024-10-07 12:32:16.325072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.240 [2024-10-07 12:32:16.325092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.240 [2024-10-07 12:32:16.325104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.240 [2024-10-07 12:32:16.444881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.240 [2024-10-07 12:32:16.444957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.240 [2024-10-07 12:32:16.444982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.240 [2024-10-07 12:32:16.444995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.499 [2024-10-07 12:32:16.543205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.499 [2024-10-07 12:32:16.543274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.499 [2024-10-07 12:32:16.543291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.499 [2024-10-07 12:32:16.543303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.499 [2024-10-07 12:32:16.543379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.499 [2024-10-07 12:32:16.543393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.499 [2024-10-07 12:32:16.543406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.499 [2024-10-07 12:32:16.543425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.499 [2024-10-07 12:32:16.543457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.499 [2024-10-07 12:32:16.543469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.499 [2024-10-07 12:32:16.543481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.499 [2024-10-07 12:32:16.543493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.499 [2024-10-07 12:32:16.543616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.499 [2024-10-07 12:32:16.543631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.499 [2024-10-07 12:32:16.543643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.499 [2024-10-07 12:32:16.543655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.499 [2024-10-07 12:32:16.543704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.499 [2024-10-07 12:32:16.543718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.499 
[2024-10-07 12:32:16.543730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:53.499 [2024-10-07 12:32:16.543742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.499 [2024-10-07 12:32:16.543784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:53.499 [2024-10-07 12:32:16.543796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:53.499 [2024-10-07 12:32:16.543809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:53.499 [2024-10-07 12:32:16.543820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.499 [2024-10-07 12:32:16.543871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:53.499 [2024-10-07 12:32:16.543885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:53.499 [2024-10-07 12:32:16.543937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:53.499 [2024-10-07 12:32:16.543966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:53.499 [2024-10-07 12:32:16.544130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 504.354 ms, result 0
00:21:54.877
00:21:54.877
00:21:54.877 12:32:17 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76185
00:21:54.877 12:32:17 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76185
00:21:54.877 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76185 ']'
00:21:54.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:54.877 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:54.877 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:54.877 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:54.877 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:21:54.877 12:32:17 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:21:54.877 [2024-10-07 12:32:18.011607] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:21:54.877 [2024-10-07 12:32:18.011740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76185 ]
00:21:55.136 [2024-10-07 12:32:18.182623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:55.136 [2024-10-07 12:32:18.410100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:21:56.072 12:32:19 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:56.072 12:32:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:21:56.072 12:32:19 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:21:56.331 [2024-10-07 12:32:19.470222] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:56.331 [2024-10-07 12:32:19.470579] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:56.592 [2024-10-07 12:32:19.680299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.687670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.687727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.592 [2024-10-07 12:32:19.687759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.318 ms 00:21:56.592 [2024-10-07 12:32:19.687778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.687976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.687999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.592 [2024-10-07 12:32:19.688013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:21:56.592 [2024-10-07 12:32:19.688029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.688066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.688084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:56.592 [2024-10-07 12:32:19.688097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:56.592 [2024-10-07 12:32:19.688112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.688145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:56.592 [2024-10-07 12:32:19.693158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.693201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.592 [2024-10-07 12:32:19.693219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.025 ms 00:21:56.592 [2024-10-07 12:32:19.693236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.693329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.693343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:56.592 [2024-10-07 12:32:19.693360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:56.592 [2024-10-07 12:32:19.693372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.693411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:56.592 [2024-10-07 12:32:19.693440] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:56.592 [2024-10-07 12:32:19.693498] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:56.592 [2024-10-07 12:32:19.693529] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:56.592 [2024-10-07 12:32:19.693627] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:56.592 [2024-10-07 12:32:19.693643] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:56.592 [2024-10-07 12:32:19.693668] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:56.592 [2024-10-07 12:32:19.693685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:56.592 [2024-10-07 12:32:19.693706] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:56.592 [2024-10-07 12:32:19.693720] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:56.592 [2024-10-07 12:32:19.693739] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:56.592 [2024-10-07 12:32:19.693752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:56.592 [2024-10-07 12:32:19.693774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:56.592 [2024-10-07 12:32:19.693792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.693811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:56.592 [2024-10-07 12:32:19.693824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:21:56.592 [2024-10-07 12:32:19.693843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.693943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.592 [2024-10-07 12:32:19.693965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:56.592 [2024-10-07 12:32:19.693979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:56.592 [2024-10-07 12:32:19.693996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.592 [2024-10-07 12:32:19.694105] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:56.592 [2024-10-07 12:32:19.694136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:56.592 [2024-10-07 12:32:19.694149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.592 [2024-10-07 12:32:19.694168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:56.593 [2024-10-07 12:32:19.694198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:56.593 [2024-10-07 12:32:19.694247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.593 [2024-10-07 12:32:19.694276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:56.593 [2024-10-07 12:32:19.694294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:56.593 [2024-10-07 12:32:19.694306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.593 [2024-10-07 12:32:19.694324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:56.593 [2024-10-07 12:32:19.694336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:56.593 [2024-10-07 12:32:19.694353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.593 
[2024-10-07 12:32:19.694365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:56.593 [2024-10-07 12:32:19.694383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:56.593 [2024-10-07 12:32:19.694437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:56.593 [2024-10-07 12:32:19.694488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:56.593 [2024-10-07 12:32:19.694525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:56.593 [2024-10-07 12:32:19.694565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:56.593 [2024-10-07 12:32:19.694607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.593 [2024-10-07 12:32:19.694634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:56.593 [2024-10-07 12:32:19.694648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:56.593 [2024-10-07 12:32:19.694659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.593 [2024-10-07 12:32:19.694673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:56.593 [2024-10-07 12:32:19.694684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:56.593 [2024-10-07 12:32:19.694700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:56.593 [2024-10-07 12:32:19.694727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:56.593 [2024-10-07 12:32:19.694738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694752] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:56.593 [2024-10-07 12:32:19.694764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:56.593 [2024-10-07 12:32:19.694778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.593 [2024-10-07 12:32:19.694807] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:56.593 [2024-10-07 12:32:19.694818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:56.593 [2024-10-07 12:32:19.694832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:56.593 [2024-10-07 12:32:19.694844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:56.593 [2024-10-07 12:32:19.694858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:56.593 [2024-10-07 12:32:19.694869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:56.593 [2024-10-07 12:32:19.694885] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:56.593 [2024-10-07 12:32:19.694910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.694937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:56.593 [2024-10-07 12:32:19.694950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:56.593 [2024-10-07 12:32:19.694968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:56.593 [2024-10-07 12:32:19.694989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:56.593 [2024-10-07 12:32:19.695009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:56.593 [2024-10-07 12:32:19.695022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:56.593 [2024-10-07 12:32:19.695040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:56.593 [2024-10-07 12:32:19.695052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:56.593 [2024-10-07 12:32:19.695072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:56.593 [2024-10-07 12:32:19.695085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.695103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.695116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.695135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.695148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:56.593 [2024-10-07 12:32:19.695166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:56.593 [2024-10-07 
12:32:19.695180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.695211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:56.593 [2024-10-07 12:32:19.695224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:56.593 [2024-10-07 12:32:19.695243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:56.593 [2024-10-07 12:32:19.695256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:56.593 [2024-10-07 12:32:19.695275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.695288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:56.593 [2024-10-07 12:32:19.695307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:21:56.593 [2024-10-07 12:32:19.695319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.735923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.735986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.593 [2024-10-07 12:32:19.736028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.558 ms 00:21:56.593 [2024-10-07 12:32:19.736041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.736219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.736234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:56.593 [2024-10-07 12:32:19.736254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:56.593 [2024-10-07 12:32:19.736267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.794163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.794230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.593 [2024-10-07 12:32:19.794272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.949 ms 00:21:56.593 [2024-10-07 12:32:19.794286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.794454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.794470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.593 [2024-10-07 12:32:19.794498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.593 [2024-10-07 12:32:19.794526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.795073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.795095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.593 [2024-10-07 12:32:19.795120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:21:56.593 [2024-10-07 12:32:19.795136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.795304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.795324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.593 [2024-10-07 12:32:19.795349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:56.593 [2024-10-07 12:32:19.795373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.818843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.593 [2024-10-07 12:32:19.818916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.593 [2024-10-07 12:32:19.818959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms 00:21:56.593 [2024-10-07 12:32:19.818985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.593 [2024-10-07 12:32:19.838265] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:56.593 [2024-10-07 12:32:19.838341] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:56.593 [2024-10-07 12:32:19.838385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.594 [2024-10-07 12:32:19.838398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:56.594 [2024-10-07 12:32:19.838420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.250 ms 00:21:56.594 [2024-10-07 12:32:19.838432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.594 [2024-10-07 12:32:19.868464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.594 [2024-10-07 12:32:19.868715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:56.594 [2024-10-07 12:32:19.868755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.958 ms 00:21:56.594 [2024-10-07 12:32:19.868783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:19.887285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:19.887476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:56.853 [2024-10-07 12:32:19.887519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.414 ms 00:21:56.853 [2024-10-07 12:32:19.887532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:19.905594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:19.905642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:56.853 [2024-10-07 12:32:19.905683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.987 ms 00:21:56.853 [2024-10-07 12:32:19.905695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:19.906542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:19.906583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:56.853 [2024-10-07 12:32:19.906604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:21:56.853 [2024-10-07 12:32:19.906624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 
12:32:19.993985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:19.994067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:56.853 [2024-10-07 12:32:19.994095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.455 ms 00:21:56.853 [2024-10-07 12:32:19.994132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.005657] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:56.853 [2024-10-07 12:32:20.022987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.023099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:56.853 [2024-10-07 12:32:20.023118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.732 ms 00:21:56.853 [2024-10-07 12:32:20.023138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.023305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.023328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:56.853 [2024-10-07 12:32:20.023342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:56.853 [2024-10-07 12:32:20.023358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.023421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.023437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:56.853 [2024-10-07 12:32:20.023451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:56.853 [2024-10-07 12:32:20.023467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.023497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.023513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:56.853 [2024-10-07 12:32:20.023533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:56.853 [2024-10-07 12:32:20.023549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.023590] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:56.853 [2024-10-07 12:32:20.023614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.023626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:56.853 [2024-10-07 12:32:20.023642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:56.853 [2024-10-07 12:32:20.023654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.061390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.061453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:56.853 [2024-10-07 12:32:20.061496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.757 ms 00:21:56.853 [2024-10-07 12:32:20.061517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.061667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.853 [2024-10-07 12:32:20.061683] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:56.853 [2024-10-07 12:32:20.061703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:56.853 [2024-10-07 12:32:20.061716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.853 [2024-10-07 12:32:20.062826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.853 [2024-10-07 12:32:20.067570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.492 ms, result 0 00:21:56.853 [2024-10-07 12:32:20.068920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:56.853 Some configs were skipped because the RPC state that can call them passed over. 00:21:56.853 12:32:20 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:57.112 [2024-10-07 12:32:20.333012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.112 [2024-10-07 12:32:20.333101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:57.112 [2024-10-07 12:32:20.333120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.733 ms 00:21:57.112 [2024-10-07 12:32:20.333136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.112 [2024-10-07 12:32:20.333180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.909 ms, result 0 00:21:57.112 true 00:21:57.112 12:32:20 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:57.372 [2024-10-07 12:32:20.556627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.372 [2024-10-07 12:32:20.556889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:57.372 [2024-10-07 12:32:20.557046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:21:57.372 [2024-10-07 12:32:20.557096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.372 [2024-10-07 12:32:20.557195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.993 ms, result 0 00:21:57.372 true 00:21:57.372 12:32:20 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76185 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76185 ']' 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76185 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76185 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.372 killing process with pid 76185 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76185' 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76185 00:21:57.372 12:32:20 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76185
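The two trim.sh steps above unmap 1024 blocks at LBA 0 and 1024 blocks at LBA 23591936; since the startup traces in this log report an L2P of 23592960 entries, the second range is exactly the last 1024 blocks of the device. A minimal sketch of the same pair of calls driven through SPDK's Python JSON-RPC client instead of scripts/rpc.py; the socket path and sys.path entry are assumptions, while the method name, bdev name, LBAs and block counts are taken from the log:

  import sys
  sys.path.append('/home/vagrant/spdk_repo/spdk/python')  # assumed repo layout

  from spdk.rpc.client import JSONRPCClient

  client = JSONRPCClient('/var/tmp/spdk.sock')  # assumed default SPDK RPC socket

  # 23591936 = 23592960 - 1024: the trailing 1024 blocks of the reported L2P range.
  for lba in (0, 23591936):
      client.call('bdev_ftl_unmap', {'name': 'ftl0', 'lba': lba, 'num_blocks': 1024})

00:21:58.775 [2024-10-07 12:32:21.699461]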
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.699534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:58.775 [2024-10-07 12:32:21.699568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:58.775 [2024-10-07 12:32:21.699583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.699610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:58.775 [2024-10-07 12:32:21.703811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.703856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:58.775 [2024-10-07 12:32:21.703878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.179 ms 00:21:58.775 [2024-10-07 12:32:21.703891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.704190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.704206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:58.775 [2024-10-07 12:32:21.704225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:21:58.775 [2024-10-07 12:32:21.704237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.707556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.707608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:58.775 [2024-10-07 12:32:21.707628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:21:58.775 [2024-10-07 12:32:21.707641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.713251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.713290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:58.775 [2024-10-07 12:32:21.713310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.570 ms 00:21:58.775 [2024-10-07 12:32:21.713325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.727360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.727396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:58.775 [2024-10-07 12:32:21.727416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.988 ms 00:21:58.775 [2024-10-07 12:32:21.727427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.737600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.737640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.775 [2024-10-07 12:32:21.737659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.121 ms 00:21:58.775 [2024-10-07 12:32:21.737683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.737834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.737849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.775 [2024-10-07 12:32:21.737865] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:58.775 [2024-10-07 12:32:21.737879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.752642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.752678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:58.775 [2024-10-07 12:32:21.752699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.719 ms 00:21:58.775 [2024-10-07 12:32:21.752712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.767918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.767954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:58.775 [2024-10-07 12:32:21.768000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.156 ms 00:21:58.775 [2024-10-07 12:32:21.768012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.782390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.782425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.775 [2024-10-07 12:32:21.782461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.333 ms 00:21:58.775 [2024-10-07 12:32:21.782473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.775 [2024-10-07 12:32:21.796475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.775 [2024-10-07 12:32:21.796510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.776 [2024-10-07 12:32:21.796532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.928 ms 00:21:58.776 [2024-10-07 12:32:21.796543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.776 [2024-10-07 12:32:21.796626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.776 [2024-10-07 12:32:21.796646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 
12:32:21.796800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.796997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:58.776 [2024-10-07 12:32:21.797238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.797989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:58.776 [2024-10-07 12:32:21.798002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.777 [2024-10-07 12:32:21.798301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.777 [2024-10-07 12:32:21.798323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2 00:21:58.777 [2024-10-07 12:32:21.798336] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:58.777 [2024-10-07 12:32:21.798354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:58.777 [2024-10-07 12:32:21.798365] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:58.777 [2024-10-07 12:32:21.798384] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:58.777 [2024-10-07 12:32:21.798409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.777 [2024-10-07 12:32:21.798427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.777 [2024-10-07 12:32:21.798445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.777 [2024-10-07 12:32:21.798462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.777 [2024-10-07 12:32:21.798473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.777 [2024-10-07 12:32:21.798491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:58.777 [2024-10-07 12:32:21.798503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:58.777 [2024-10-07 12:32:21.798523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:21:58.777 [2024-10-07 12:32:21.798536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.818347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.777 [2024-10-07 12:32:21.818381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.777 [2024-10-07 12:32:21.818422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.793 ms 00:21:58.777 [2024-10-07 12:32:21.818434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.819055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.777 [2024-10-07 12:32:21.819076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.777 [2024-10-07 12:32:21.819095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:21:58.777 [2024-10-07 12:32:21.819107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.878468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.777 [2024-10-07 12:32:21.878506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.777 [2024-10-07 12:32:21.878523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.777 [2024-10-07 12:32:21.878538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.878624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.777 [2024-10-07 12:32:21.878637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.777 [2024-10-07 12:32:21.878652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.777 [2024-10-07 12:32:21.878664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.878721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.777 [2024-10-07 12:32:21.878735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.777 [2024-10-07 12:32:21.878761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.777 [2024-10-07 12:32:21.878772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.878806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.777 [2024-10-07 12:32:21.878818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.777 [2024-10-07 12:32:21.878835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.777 [2024-10-07 12:32:21.878846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.777 [2024-10-07 12:32:21.996645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.777 [2024-10-07 12:32:21.996700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.777 [2024-10-07 12:32:21.996722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.777 [2024-10-07 12:32:21.996740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 
12:32:22.092110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.048 [2024-10-07 12:32:22.092202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.092311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.048 [2024-10-07 12:32:22.092350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.092403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.048 [2024-10-07 12:32:22.092441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.092577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.048 [2024-10-07 12:32:22.092612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.092678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.048 [2024-10-07 12:32:22.092720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.092782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.048 [2024-10-07 12:32:22.092818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.092883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.048 [2024-10-07 12:32:22.092924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.048 [2024-10-07 12:32:22.092944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.048 [2024-10-07 12:32:22.092956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.048 [2024-10-07 12:32:22.093116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.267 ms, result 0
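With the bdev shut down cleanly, trim.sh dumps the device contents for verification: the spdk_dd invocation below copies 65536 blocks from ftl0 into test/ftl/data, which at a 4 KiB logical block is the 256 MiB that the copy progress further down reports. A hedged sketch of the kind of check such a dump enables; the block size and the zero-fill expectation for trimmed ranges are assumptions, while the file path, LBA and block count come from the log:

  BLOCK_SIZE = 4096  # assumed FTL logical block size (65536 blocks -> 256 MiB)
  DATA_FILE = '/home/vagrant/spdk_repo/spdk/test/ftl/data'

  def range_reads_back_zero(path: str, lba: int, num_blocks: int) -> bool:
      # Read the given block range out of the spdk_dd dump file.
      with open(path, 'rb') as f:
          f.seek(lba * BLOCK_SIZE)
          data = f.read(num_blocks * BLOCK_SIZE)
      return data == bytes(len(data))

  # Only the range unmapped at LBA 0 lies inside the 65536-block dump;
  # the range at LBA 23591936 is beyond the copied region.
  print(range_reads_back_zero(DATA_FILE, 0, 1024))

00:21:59.987 12:32:23 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:59.987 12:32:23 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0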
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:00.247 [2024-10-07 12:32:23.353056] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:22:00.247 [2024-10-07 12:32:23.353198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76254 ] 00:22:00.247 [2024-10-07 12:32:23.532345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.507 [2024-10-07 12:32:23.751682] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.077 [2024-10-07 12:32:24.107674] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.077 [2024-10-07 12:32:24.107750] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.077 [2024-10-07 12:32:24.271934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.271999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:01.077 [2024-10-07 12:32:24.272022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:01.077 [2024-10-07 12:32:24.272034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.275192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.275235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.077 [2024-10-07 12:32:24.275250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.137 ms 00:22:01.077 [2024-10-07 12:32:24.275266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.275388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:01.077 [2024-10-07 12:32:24.276358] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:01.077 [2024-10-07 12:32:24.276392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.276410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.077 [2024-10-07 12:32:24.276423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:22:01.077 [2024-10-07 12:32:24.276436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.278010] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:01.077 [2024-10-07 12:32:24.297546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.297593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:01.077 [2024-10-07 12:32:24.297611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.569 ms 00:22:01.077 [2024-10-07 12:32:24.297624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.297752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.297770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:01.077 [2024-10-07 12:32:24.297788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:22:01.077 [2024-10-07 12:32:24.297800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.304954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.304981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.077 [2024-10-07 12:32:24.304996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.113 ms 00:22:01.077 [2024-10-07 12:32:24.305008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.305122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.305143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.077 [2024-10-07 12:32:24.305157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:01.077 [2024-10-07 12:32:24.305169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.305204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.077 [2024-10-07 12:32:24.305217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:01.077 [2024-10-07 12:32:24.305230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:01.077 [2024-10-07 12:32:24.305242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.077 [2024-10-07 12:32:24.305272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:01.077 [2024-10-07 12:32:24.310039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.078 [2024-10-07 12:32:24.310074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.078 [2024-10-07 12:32:24.310088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.783 ms 00:22:01.078 [2024-10-07 12:32:24.310100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.078 [2024-10-07 12:32:24.310180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.078 [2024-10-07 12:32:24.310200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:01.078 [2024-10-07 12:32:24.310214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:01.078 [2024-10-07 12:32:24.310226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.078 [2024-10-07 12:32:24.310255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:01.078 [2024-10-07 12:32:24.310281] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:01.078 [2024-10-07 12:32:24.310320] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:01.078 [2024-10-07 12:32:24.310341] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:01.078 [2024-10-07 12:32:24.310440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:01.078 [2024-10-07 12:32:24.310456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:01.078 [2024-10-07 12:32:24.310471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:01.078 [2024-10-07 12:32:24.310487] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:01.078 [2024-10-07 12:32:24.310502] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:01.078 [2024-10-07 12:32:24.310516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:01.078 [2024-10-07 12:32:24.310527] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:01.078 [2024-10-07 12:32:24.310539] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:01.078 [2024-10-07 12:32:24.310551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:01.078 [2024-10-07 12:32:24.310563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.078 [2024-10-07 12:32:24.310579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:01.078 [2024-10-07 12:32:24.310592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:22:01.078 [2024-10-07 12:32:24.310605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.078 [2024-10-07 12:32:24.310686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.078 [2024-10-07 12:32:24.310699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:01.078 [2024-10-07 12:32:24.310712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:01.078 [2024-10-07 12:32:24.310723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.078 [2024-10-07 12:32:24.310820] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:01.078 [2024-10-07 12:32:24.310835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:01.078 [2024-10-07 12:32:24.310852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.078 [2024-10-07 12:32:24.310865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.310878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:01.078 [2024-10-07 12:32:24.310889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.310917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:01.078 [2024-10-07 12:32:24.310930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:01.078 [2024-10-07 12:32:24.310942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:01.078 [2024-10-07 12:32:24.310953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.078 [2024-10-07 12:32:24.310964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:01.078 [2024-10-07 12:32:24.310999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:01.078 [2024-10-07 12:32:24.311011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.078 [2024-10-07 12:32:24.311022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:01.078 [2024-10-07 12:32:24.311034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:01.078 [2024-10-07 12:32:24.311046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311057] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:01.078 [2024-10-07 12:32:24.311069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:01.078 [2024-10-07 12:32:24.311102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:01.078 [2024-10-07 12:32:24.311135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:01.078 [2024-10-07 12:32:24.311168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:01.078 [2024-10-07 12:32:24.311201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:01.078 [2024-10-07 12:32:24.311233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.078 [2024-10-07 12:32:24.311255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:01.078 [2024-10-07 12:32:24.311266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:01.078 [2024-10-07 12:32:24.311277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.078 [2024-10-07 12:32:24.311288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:01.078 [2024-10-07 12:32:24.311299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:01.078 [2024-10-07 12:32:24.311309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:01.078 [2024-10-07 12:32:24.311331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:01.078 [2024-10-07 12:32:24.311343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311355] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:01.078 [2024-10-07 12:32:24.311367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:01.078 [2024-10-07 12:32:24.311379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.078 [2024-10-07 12:32:24.311402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:01.078 
[2024-10-07 12:32:24.311414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:01.078 [2024-10-07 12:32:24.311425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:01.078 [2024-10-07 12:32:24.311436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:01.078 [2024-10-07 12:32:24.311447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:01.078 [2024-10-07 12:32:24.311458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:01.078 [2024-10-07 12:32:24.311471] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:01.078 [2024-10-07 12:32:24.311491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:01.078 [2024-10-07 12:32:24.311516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:01.078 [2024-10-07 12:32:24.311528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:01.078 [2024-10-07 12:32:24.311540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:01.078 [2024-10-07 12:32:24.311553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:01.078 [2024-10-07 12:32:24.311565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:01.078 [2024-10-07 12:32:24.311577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:01.078 [2024-10-07 12:32:24.311589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:01.078 [2024-10-07 12:32:24.311601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:01.078 [2024-10-07 12:32:24.311613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:01.078 [2024-10-07 12:32:24.311675] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:01.078 [2024-10-07 12:32:24.311688] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:01.078 [2024-10-07 12:32:24.311714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:01.078 [2024-10-07 12:32:24.311726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:01.078 [2024-10-07 12:32:24.311738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:01.078 [2024-10-07 12:32:24.311752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.078 [2024-10-07 12:32:24.311769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:01.079 [2024-10-07 12:32:24.311781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:22:01.079 [2024-10-07 12:32:24.311792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.079 [2024-10-07 12:32:24.363417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.079 [2024-10-07 12:32:24.363480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.079 [2024-10-07 12:32:24.363499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.641 ms 00:22:01.079 [2024-10-07 12:32:24.363511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.079 [2024-10-07 12:32:24.363712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.079 [2024-10-07 12:32:24.363727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:01.079 [2024-10-07 12:32:24.363741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:01.079 [2024-10-07 12:32:24.363753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.410657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.410733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.338 [2024-10-07 12:32:24.410751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.948 ms 00:22:01.338 [2024-10-07 12:32:24.410765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.410886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.410911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.338 [2024-10-07 12:32:24.410925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:01.338 [2024-10-07 12:32:24.410937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.411418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.411439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.338 [2024-10-07 12:32:24.411452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:22:01.338 [2024-10-07 12:32:24.411464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 
12:32:24.411601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.411617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.338 [2024-10-07 12:32:24.411630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:01.338 [2024-10-07 12:32:24.411652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.430580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.430633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.338 [2024-10-07 12:32:24.430651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.929 ms 00:22:01.338 [2024-10-07 12:32:24.430663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.449956] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:01.338 [2024-10-07 12:32:24.450009] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:01.338 [2024-10-07 12:32:24.450029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.450042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:01.338 [2024-10-07 12:32:24.450056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.227 ms 00:22:01.338 [2024-10-07 12:32:24.450069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.480388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.480464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:01.338 [2024-10-07 12:32:24.480497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.252 ms 00:22:01.338 [2024-10-07 12:32:24.480510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.338 [2024-10-07 12:32:24.500353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.338 [2024-10-07 12:32:24.500422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:01.338 [2024-10-07 12:32:24.500440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.688 ms 00:22:01.339 [2024-10-07 12:32:24.500452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.339 [2024-10-07 12:32:24.518906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.339 [2024-10-07 12:32:24.518991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:01.339 [2024-10-07 12:32:24.519011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.341 ms 00:22:01.339 [2024-10-07 12:32:24.519023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.339 [2024-10-07 12:32:24.519802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.339 [2024-10-07 12:32:24.519832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:01.339 [2024-10-07 12:32:24.519847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:22:01.339 [2024-10-07 12:32:24.519860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.339 [2024-10-07 12:32:24.608486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:01.339 [2024-10-07 12:32:24.608569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:01.339 [2024-10-07 12:32:24.608589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.731 ms 00:22:01.339 [2024-10-07 12:32:24.608602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.339 [2024-10-07 12:32:24.622449] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:01.598 [2024-10-07 12:32:24.640149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.598 [2024-10-07 12:32:24.640219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:01.598 [2024-10-07 12:32:24.640239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.382 ms 00:22:01.598 [2024-10-07 12:32:24.640252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.598 [2024-10-07 12:32:24.640394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.598 [2024-10-07 12:32:24.640411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:01.598 [2024-10-07 12:32:24.640424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:01.598 [2024-10-07 12:32:24.640437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.598 [2024-10-07 12:32:24.640504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.598 [2024-10-07 12:32:24.640523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:01.598 [2024-10-07 12:32:24.640537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:01.598 [2024-10-07 12:32:24.640548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.598 [2024-10-07 12:32:24.640579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.598 [2024-10-07 12:32:24.640592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:01.598 [2024-10-07 12:32:24.640605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:01.598 [2024-10-07 12:32:24.640617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.598 [2024-10-07 12:32:24.640656] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:01.598 [2024-10-07 12:32:24.640671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.598 [2024-10-07 12:32:24.640687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:01.598 [2024-10-07 12:32:24.640699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:01.598 [2024-10-07 12:32:24.640711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.598 [2024-10-07 12:32:24.678243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.598 [2024-10-07 12:32:24.678309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:01.598 [2024-10-07 12:32:24.678328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.561 ms 00:22:01.598 [2024-10-07 12:32:24.678341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.598 [2024-10-07 12:32:24.678519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.599 [2024-10-07 12:32:24.678536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:01.599 [2024-10-07 12:32:24.678549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:01.599 [2024-10-07 12:32:24.678561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.599 [2024-10-07 12:32:24.679564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:01.599 [2024-10-07 12:32:24.684316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.981 ms, result 0 00:22:01.599 [2024-10-07 12:32:24.685284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.599 [2024-10-07 12:32:24.704079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.535  [2024-10-07T12:32:26.763Z] Copying: 25/256 [MB] (25 MBps) [2024-10-07T12:32:28.141Z] Copying: 47/256 [MB] (22 MBps) [2024-10-07T12:32:28.708Z] Copying: 69/256 [MB] (21 MBps) [2024-10-07T12:32:30.088Z] Copying: 91/256 [MB] (21 MBps) [2024-10-07T12:32:31.063Z] Copying: 113/256 [MB] (22 MBps) [2024-10-07T12:32:32.000Z] Copying: 135/256 [MB] (21 MBps) [2024-10-07T12:32:32.938Z] Copying: 156/256 [MB] (21 MBps) [2024-10-07T12:32:33.875Z] Copying: 178/256 [MB] (21 MBps) [2024-10-07T12:32:34.811Z] Copying: 199/256 [MB] (21 MBps) [2024-10-07T12:32:35.749Z] Copying: 221/256 [MB] (21 MBps) [2024-10-07T12:32:36.317Z] Copying: 243/256 [MB] (21 MBps) [2024-10-07T12:32:36.317Z] Copying: 256/256 [MB] (average 22 MBps)[2024-10-07 12:32:36.284539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:13.026 [2024-10-07 12:32:36.299651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.026 [2024-10-07 12:32:36.299705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:13.026 [2024-10-07 12:32:36.299724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:13.026 [2024-10-07 12:32:36.299737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.026 [2024-10-07 12:32:36.299768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:13.026 [2024-10-07 12:32:36.304109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.026 [2024-10-07 12:32:36.304144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:13.026 [2024-10-07 12:32:36.304160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:22:13.026 [2024-10-07 12:32:36.304172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.026 [2024-10-07 12:32:36.304423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.026 [2024-10-07 12:32:36.304449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:13.026 [2024-10-07 12:32:36.304462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:22:13.026 [2024-10-07 12:32:36.304474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.026 [2024-10-07 12:32:36.307348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.026 [2024-10-07 12:32:36.307372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:13.026 [2024-10-07 12:32:36.307386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.859 ms 00:22:13.026 [2024-10-07 12:32:36.307398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.026 [2024-10-07 12:32:36.313061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.026 [2024-10-07 12:32:36.313099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:13.026 [2024-10-07 12:32:36.313128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.646 ms 00:22:13.026 [2024-10-07 12:32:36.313140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.351394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.351458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:13.287 [2024-10-07 12:32:36.351476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.218 ms 00:22:13.287 [2024-10-07 12:32:36.351488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.373374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.373436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:13.287 [2024-10-07 12:32:36.373454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.825 ms 00:22:13.287 [2024-10-07 12:32:36.373467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.373663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.373680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:13.287 [2024-10-07 12:32:36.373694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:13.287 [2024-10-07 12:32:36.373707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.412637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.412703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:13.287 [2024-10-07 12:32:36.412722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.950 ms 00:22:13.287 [2024-10-07 12:32:36.412734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.450508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.450569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:13.287 [2024-10-07 12:32:36.450587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.711 ms 00:22:13.287 [2024-10-07 12:32:36.450599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.487936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.488002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:13.287 [2024-10-07 12:32:36.488020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.308 ms 00:22:13.287 [2024-10-07 12:32:36.488032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.525426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.287 [2024-10-07 12:32:36.525495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:13.287 [2024-10-07 
12:32:36.525513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.318 ms 00:22:13.287 [2024-10-07 12:32:36.525525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.287 [2024-10-07 12:32:36.525650] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:13.287 [2024-10-07 12:32:36.525671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.525997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:13.287 [2024-10-07 12:32:36.526258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526271] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526575] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 
12:32:36.526883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:13.288 [2024-10-07 12:32:36.526969] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:13.288 [2024-10-07 12:32:36.526990] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2 00:22:13.288 [2024-10-07 12:32:36.527002] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:13.288 [2024-10-07 12:32:36.527014] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:13.288 [2024-10-07 12:32:36.527025] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:13.288 [2024-10-07 12:32:36.527046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:13.288 [2024-10-07 12:32:36.527058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:13.288 [2024-10-07 12:32:36.527070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:13.288 [2024-10-07 12:32:36.527081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:13.288 [2024-10-07 12:32:36.527092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:13.288 [2024-10-07 12:32:36.527103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:13.288 [2024-10-07 12:32:36.527114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.288 [2024-10-07 12:32:36.527126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:13.288 [2024-10-07 12:32:36.527139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:22:13.288 [2024-10-07 12:32:36.527150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.288 [2024-10-07 12:32:36.547211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.288 [2024-10-07 12:32:36.547285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:13.288 [2024-10-07 12:32:36.547302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.063 ms 00:22:13.288 [2024-10-07 12:32:36.547315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.288 [2024-10-07 12:32:36.547947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.288 [2024-10-07 12:32:36.547969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:13.288 [2024-10-07 12:32:36.547983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:22:13.288 [2024-10-07 12:32:36.547995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.597923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.597989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.548 [2024-10-07 12:32:36.598007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.598019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.598134] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.598148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.548 [2024-10-07 12:32:36.598162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.598174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.598237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.598261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.548 [2024-10-07 12:32:36.598274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.598286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.598309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.598322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.548 [2024-10-07 12:32:36.598334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.598345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.723648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.723734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.548 [2024-10-07 12:32:36.723752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.723766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.829660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.829731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.548 [2024-10-07 12:32:36.829749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.829762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.829890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.829928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.548 [2024-10-07 12:32:36.829951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.829963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.829998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.830010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.548 [2024-10-07 12:32:36.830023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.830035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.830173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.830189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.548 [2024-10-07 12:32:36.830201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.830223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:13.548 [2024-10-07 12:32:36.830269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.830284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:13.548 [2024-10-07 12:32:36.830296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.830308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.830350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.830364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.548 [2024-10-07 12:32:36.830377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.830398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.830445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.548 [2024-10-07 12:32:36.830459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.548 [2024-10-07 12:32:36.830471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.548 [2024-10-07 12:32:36.830483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.548 [2024-10-07 12:32:36.830658] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.876 ms, result 0 00:22:14.927 00:22:14.927 00:22:14.927 12:32:38 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:14.927 12:32:38 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:15.493 12:32:38 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:15.493 [2024-10-07 12:32:38.583945] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
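At trim.sh lines 86-90 above, the harness appears to verify the trim from the host side: the first 4 MiB of the dumped data file must compare equal to /dev/zero, an md5 of the file is recorded, and spdk_dd then rewrites the device from random_pattern. A minimal Python sketch of that zero-check plus checksum, on the assumption that mimicking cmp/md5sum byte-for-byte is what matters here (the path is taken from the log; the function name is ours):

import hashlib

DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path as echoed by trim.sh

def verify_trimmed_prefix(path: str, nbytes: int = 4 * 1024 * 1024) -> str:
    """Equivalent of `cmp --bytes=4194304 <path> /dev/zero` + `md5sum <path>`."""
    with open(path, "rb") as f:
        prefix = f.read(nbytes)
        if prefix != bytes(len(prefix)):
            # Trimmed LBAs must read back as zeroes, like cmp against /dev/zero.
            raise AssertionError("trimmed region is not all zeroes")
        return hashlib.md5(prefix + f.read()).hexdigest()

# print(verify_trimmed_prefix(DATA))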
00:22:15.493 [2024-10-07 12:32:38.584069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76414 ] 00:22:15.493 [2024-10-07 12:32:38.757425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.752 [2024-10-07 12:32:38.963813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.011 [2024-10-07 12:32:39.293304] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.011 [2024-10-07 12:32:39.293379] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.271 [2024-10-07 12:32:39.457176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.457242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.271 [2024-10-07 12:32:39.457265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:16.271 [2024-10-07 12:32:39.457278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.460464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.460509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.271 [2024-10-07 12:32:39.460525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:22:16.271 [2024-10-07 12:32:39.460542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.460657] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.271 [2024-10-07 12:32:39.461638] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.271 [2024-10-07 12:32:39.461673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.461690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.271 [2024-10-07 12:32:39.461703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:22:16.271 [2024-10-07 12:32:39.461715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.463302] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:16.271 [2024-10-07 12:32:39.482643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.482689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:16.271 [2024-10-07 12:32:39.482706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.372 ms 00:22:16.271 [2024-10-07 12:32:39.482719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.482837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.482855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:16.271 [2024-10-07 12:32:39.482873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:16.271 [2024-10-07 12:32:39.482885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.490027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
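Every management step in this startup sequence is logged by trace_step as a fixed group: an Action (or Rollback) marker, then name, duration and status lines. That makes per-step timing easy to recover from a captured console log. A rough sketch, assuming one log entry per line as in the live console (the regex and names are ours, not part of SPDK):

import re

# name/duration lines emitted by mngt/ftl_mngt.c:trace_step, as seen above
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_durations(lines):
    """Yield (step name, duration in ms) pairs from an FTL console log."""
    name = None
    for line in lines:
        if (m := NAME_RE.search(line)):
            name = m.group(1).strip()
        elif (m := DUR_RE.search(line)) and name is not None:
            yield name, float(m.group(1))
            name = None

# slowest steps first, e.g.:
# sorted(step_durations(open("console.log")), key=lambda kv: -kv[1])[:5]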
00:22:16.271 [2024-10-07 12:32:39.490064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.271 [2024-10-07 12:32:39.490079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.082 ms 00:22:16.271 [2024-10-07 12:32:39.490104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.490227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.490245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.271 [2024-10-07 12:32:39.490259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:16.271 [2024-10-07 12:32:39.490271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.490304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.490317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.271 [2024-10-07 12:32:39.490329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.271 [2024-10-07 12:32:39.490341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.490371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:16.271 [2024-10-07 12:32:39.495306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.495341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.271 [2024-10-07 12:32:39.495355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.951 ms 00:22:16.271 [2024-10-07 12:32:39.495368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.495456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.495475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.271 [2024-10-07 12:32:39.495489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:16.271 [2024-10-07 12:32:39.495501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.495528] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:16.271 [2024-10-07 12:32:39.495553] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:16.271 [2024-10-07 12:32:39.495591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:16.271 [2024-10-07 12:32:39.495612] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:16.271 [2024-10-07 12:32:39.495709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:16.271 [2024-10-07 12:32:39.495725] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.271 [2024-10-07 12:32:39.495740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:16.271 [2024-10-07 12:32:39.495756] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.271 [2024-10-07 12:32:39.495769] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.271 [2024-10-07 12:32:39.495783] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:16.271 [2024-10-07 12:32:39.495795] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.271 [2024-10-07 12:32:39.495806] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:16.271 [2024-10-07 12:32:39.495818] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:16.271 [2024-10-07 12:32:39.495830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.495846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.271 [2024-10-07 12:32:39.495858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:22:16.271 [2024-10-07 12:32:39.495870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.495963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.271 [2024-10-07 12:32:39.495977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.271 [2024-10-07 12:32:39.495990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:16.271 [2024-10-07 12:32:39.496003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.271 [2024-10-07 12:32:39.496097] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.271 [2024-10-07 12:32:39.496113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.271 [2024-10-07 12:32:39.496130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.271 [2024-10-07 12:32:39.496143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.271 [2024-10-07 12:32:39.496155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.271 [2024-10-07 12:32:39.496166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.271 [2024-10-07 12:32:39.496177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:16.271 [2024-10-07 12:32:39.496189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.271 [2024-10-07 12:32:39.496201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:16.271 [2024-10-07 12:32:39.496213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.271 [2024-10-07 12:32:39.496224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.271 [2024-10-07 12:32:39.496248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:16.271 [2024-10-07 12:32:39.496260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.271 [2024-10-07 12:32:39.496272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.271 [2024-10-07 12:32:39.496283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:16.271 [2024-10-07 12:32:39.496295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.271 [2024-10-07 12:32:39.496306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:16.272 [2024-10-07 12:32:39.496317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496328] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.272 [2024-10-07 12:32:39.496352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.272 [2024-10-07 12:32:39.496385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.272 [2024-10-07 12:32:39.496417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.272 [2024-10-07 12:32:39.496450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.272 [2024-10-07 12:32:39.496482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.272 [2024-10-07 12:32:39.496504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.272 [2024-10-07 12:32:39.496515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:16.272 [2024-10-07 12:32:39.496526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.272 [2024-10-07 12:32:39.496537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:16.272 [2024-10-07 12:32:39.496548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:16.272 [2024-10-07 12:32:39.496558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:16.272 [2024-10-07 12:32:39.496580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:16.272 [2024-10-07 12:32:39.496592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496602] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.272 [2024-10-07 12:32:39.496618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.272 [2024-10-07 12:32:39.496629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.272 [2024-10-07 12:32:39.496653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:16.272 [2024-10-07 12:32:39.496664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.272 [2024-10-07 12:32:39.496675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.272 
[2024-10-07 12:32:39.496686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.272 [2024-10-07 12:32:39.496697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.272 [2024-10-07 12:32:39.496708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.272 [2024-10-07 12:32:39.496721] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.272 [2024-10-07 12:32:39.496740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:16.272 [2024-10-07 12:32:39.496767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:16.272 [2024-10-07 12:32:39.496779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:16.272 [2024-10-07 12:32:39.496791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:16.272 [2024-10-07 12:32:39.496804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:16.272 [2024-10-07 12:32:39.496816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:16.272 [2024-10-07 12:32:39.496829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:16.272 [2024-10-07 12:32:39.496841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:16.272 [2024-10-07 12:32:39.496853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:16.272 [2024-10-07 12:32:39.496865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:16.272 [2024-10-07 12:32:39.496937] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.272 [2024-10-07 12:32:39.496950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.272 [2024-10-07 12:32:39.496978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:16.272 [2024-10-07 12:32:39.496990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.272 [2024-10-07 12:32:39.497002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:16.272 [2024-10-07 12:32:39.497015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.272 [2024-10-07 12:32:39.497032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.272 [2024-10-07 12:32:39.497045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:22:16.272 [2024-10-07 12:32:39.497057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.272 [2024-10-07 12:32:39.545405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.272 [2024-10-07 12:32:39.545470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.272 [2024-10-07 12:32:39.545489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.362 ms 00:22:16.272 [2024-10-07 12:32:39.545501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.272 [2024-10-07 12:32:39.545672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.272 [2024-10-07 12:32:39.545687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.272 [2024-10-07 12:32:39.545701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:16.272 [2024-10-07 12:32:39.545713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.592235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.592295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.533 [2024-10-07 12:32:39.592313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.567 ms 00:22:16.533 [2024-10-07 12:32:39.592326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.592477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.592492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.533 [2024-10-07 12:32:39.592506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:16.533 [2024-10-07 12:32:39.592518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.592999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.593015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.533 [2024-10-07 12:32:39.593028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:22:16.533 [2024-10-07 12:32:39.593040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.593172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.593195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.533 [2024-10-07 12:32:39.593208] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:16.533 [2024-10-07 12:32:39.593219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.612430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.612481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.533 [2024-10-07 12:32:39.612498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.212 ms 00:22:16.533 [2024-10-07 12:32:39.612511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.631740] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:16.533 [2024-10-07 12:32:39.631796] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:16.533 [2024-10-07 12:32:39.631816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.631830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:16.533 [2024-10-07 12:32:39.631845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.161 ms 00:22:16.533 [2024-10-07 12:32:39.631857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.662092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.662170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:16.533 [2024-10-07 12:32:39.662203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.158 ms 00:22:16.533 [2024-10-07 12:32:39.662217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.681764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.681832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:16.533 [2024-10-07 12:32:39.681849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.450 ms 00:22:16.533 [2024-10-07 12:32:39.681863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.701130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.701210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:16.533 [2024-10-07 12:32:39.701229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.146 ms 00:22:16.533 [2024-10-07 12:32:39.701241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.702154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.702193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.533 [2024-10-07 12:32:39.702207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:22:16.533 [2024-10-07 12:32:39.702220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.533 [2024-10-07 12:32:39.791150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.533 [2024-10-07 12:32:39.791218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:16.533 [2024-10-07 12:32:39.791237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.036 ms 00:22:16.534 [2024-10-07 12:32:39.791251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.534 [2024-10-07 12:32:39.804202] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:16.534 [2024-10-07 12:32:39.821438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.534 [2024-10-07 12:32:39.821499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.534 [2024-10-07 12:32:39.821518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.067 ms 00:22:16.534 [2024-10-07 12:32:39.821531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.534 [2024-10-07 12:32:39.821694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.534 [2024-10-07 12:32:39.821710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:16.534 [2024-10-07 12:32:39.821724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:16.534 [2024-10-07 12:32:39.821737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.534 [2024-10-07 12:32:39.821803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.534 [2024-10-07 12:32:39.821821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.534 [2024-10-07 12:32:39.821834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:16.534 [2024-10-07 12:32:39.821846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.534 [2024-10-07 12:32:39.821878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.534 [2024-10-07 12:32:39.821890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:16.534 [2024-10-07 12:32:39.821919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:16.534 [2024-10-07 12:32:39.821931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.534 [2024-10-07 12:32:39.821976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:16.534 [2024-10-07 12:32:39.821991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.534 [2024-10-07 12:32:39.822008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:16.534 [2024-10-07 12:32:39.822020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:16.534 [2024-10-07 12:32:39.822032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.799 [2024-10-07 12:32:39.860263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.799 [2024-10-07 12:32:39.860329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:16.799 [2024-10-07 12:32:39.860348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.265 ms 00:22:16.799 [2024-10-07 12:32:39.860361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.799 [2024-10-07 12:32:39.860549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.799 [2024-10-07 12:32:39.860565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:16.799 [2024-10-07 12:32:39.860579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:16.799 [2024-10-07 12:32:39.860591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
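The layout is dumped twice in the startup above: dump_region reports each region in MiB, while the superblock v5 dump lists the same regions as hex block offsets and sizes. With FTL's 4 KiB block size the two views line up, e.g. the l2p region (type 0x2) has blk_sz:0x5a00 = 23040 blocks = 90.00 MiB. A quick consistency check (rows copied from the dump above; the helper is ours, and the 4 KiB block size is an assumption borne out by the numbers):

FTL_BLOCK_SIZE = 4096  # 4 KiB FTL block, consistent with the dumps above

def blocks_to_mib(blocks: int) -> float:
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

# (type, blk_offs, blk_sz) taken from the "SB metadata layout - nvc" dump
for rtype, offs, size in [(0x0, 0x0, 0x20), (0x2, 0x20, 0x5A00), (0x3, 0x5A20, 0x80)]:
    print(f"type 0x{rtype:x}: offset {blocks_to_mib(offs):.2f} MiB,"
          f" size {blocks_to_mib(size):.2f} MiB")
# -> type 0x2: offset 0.12 MiB, size 90.00 MiB  (matches Region l2p above)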
00:22:16.799 [2024-10-07 12:32:39.861590] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.799 [2024-10-07 12:32:39.866177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.735 ms, result 0 00:22:16.799 [2024-10-07 12:32:39.867202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:16.799 [2024-10-07 12:32:39.886091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.799  [2024-10-07T12:32:40.090Z] Copying: 4096/4096 [kB] (average 20 MBps)[2024-10-07 12:32:40.087675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:17.058 [2024-10-07 12:32:40.102623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.102677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:17.058 [2024-10-07 12:32:40.102695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:17.058 [2024-10-07 12:32:40.102708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.102739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:17.058 [2024-10-07 12:32:40.107183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.107215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:17.058 [2024-10-07 12:32:40.107229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.431 ms 00:22:17.058 [2024-10-07 12:32:40.107242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.109379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.109428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:17.058 [2024-10-07 12:32:40.109443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:22:17.058 [2024-10-07 12:32:40.109455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.112769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.112804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:17.058 [2024-10-07 12:32:40.112818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms 00:22:17.058 [2024-10-07 12:32:40.112831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.118467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.118506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:17.058 [2024-10-07 12:32:40.118529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.605 ms 00:22:17.058 [2024-10-07 12:32:40.118540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.155966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.156026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:17.058 [2024-10-07 12:32:40.156044] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 37.425 ms 00:22:17.058 [2024-10-07 12:32:40.156056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.177573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.177632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:17.058 [2024-10-07 12:32:40.177650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.469 ms 00:22:17.058 [2024-10-07 12:32:40.177664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.177802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.177818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:17.058 [2024-10-07 12:32:40.177831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:17.058 [2024-10-07 12:32:40.177844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.058 [2024-10-07 12:32:40.215372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.058 [2024-10-07 12:32:40.215438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:17.059 [2024-10-07 12:32:40.215456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.555 ms 00:22:17.059 [2024-10-07 12:32:40.215468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.059 [2024-10-07 12:32:40.253951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.059 [2024-10-07 12:32:40.254020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:17.059 [2024-10-07 12:32:40.254038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.446 ms 00:22:17.059 [2024-10-07 12:32:40.254050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.059 [2024-10-07 12:32:40.291761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.059 [2024-10-07 12:32:40.291835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:17.059 [2024-10-07 12:32:40.291853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.654 ms 00:22:17.059 [2024-10-07 12:32:40.291866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.059 [2024-10-07 12:32:40.330000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.059 [2024-10-07 12:32:40.330073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:17.059 [2024-10-07 12:32:40.330092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.034 ms 00:22:17.059 [2024-10-07 12:32:40.330104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.059 [2024-10-07 12:32:40.330219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:17.059 [2024-10-07 12:32:40.330241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:17.059 [2024-10-07 12:32:40.330296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.330988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331223] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:17.059 [2024-10-07 12:32:40.331284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:17.060 [2024-10-07 12:32:40.331528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:17.060 [2024-10-07 12:32:40.331540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2 00:22:17.060 [2024-10-07 12:32:40.331553] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:17.060 [2024-10-07 12:32:40.331565] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:17.060 [2024-10-07 12:32:40.331582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:17.060 [2024-10-07 12:32:40.331595] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:17.060 [2024-10-07 12:32:40.331606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:17.060 [2024-10-07 12:32:40.331619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:17.060 [2024-10-07 12:32:40.331630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:17.060 [2024-10-07 12:32:40.331641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:17.060 [2024-10-07 12:32:40.331651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:17.060 [2024-10-07 12:32:40.331663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.060 [2024-10-07 12:32:40.331675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:17.060 [2024-10-07 12:32:40.331688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:22:17.060 [2024-10-07 12:32:40.331700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.351930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.319 [2024-10-07 12:32:40.352001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:17.319 [2024-10-07 12:32:40.352018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.231 ms 00:22:17.319 [2024-10-07 12:32:40.352030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.352651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.319 [2024-10-07 12:32:40.352671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:17.319 [2024-10-07 12:32:40.352684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:22:17.319 [2024-10-07 12:32:40.352695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.401137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.319 [2024-10-07 12:32:40.401208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.319 [2024-10-07 12:32:40.401227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.319 [2024-10-07 12:32:40.401241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.401351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.319 [2024-10-07 12:32:40.401365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.319 [2024-10-07 12:32:40.401377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.319 [2024-10-07 12:32:40.401389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.401453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.319 [2024-10-07 12:32:40.401475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.319 [2024-10-07 12:32:40.401487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.319 [2024-10-07 12:32:40.401499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.401521] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.319 [2024-10-07 12:32:40.401534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.319 [2024-10-07 12:32:40.401547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.319 [2024-10-07 12:32:40.401559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.319 [2024-10-07 12:32:40.527261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.319 [2024-10-07 12:32:40.527334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.319 [2024-10-07 12:32:40.527352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.319 [2024-10-07 12:32:40.527365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.577 [2024-10-07 12:32:40.630738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.577 [2024-10-07 12:32:40.630811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.577 [2024-10-07 12:32:40.630829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.577 [2024-10-07 12:32:40.630843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.577 [2024-10-07 12:32:40.630972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.577 [2024-10-07 12:32:40.630995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.577 [2024-10-07 12:32:40.631021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.577 [2024-10-07 12:32:40.631033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.577 [2024-10-07 12:32:40.631067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.577 [2024-10-07 12:32:40.631080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.577 [2024-10-07 12:32:40.631093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.577 [2024-10-07 12:32:40.631105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.577 [2024-10-07 12:32:40.631240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.577 [2024-10-07 12:32:40.631256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.577 [2024-10-07 12:32:40.631269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.577 [2024-10-07 12:32:40.631287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.577 [2024-10-07 12:32:40.631333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.578 [2024-10-07 12:32:40.631348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:17.578 [2024-10-07 12:32:40.631361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.578 [2024-10-07 12:32:40.631374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.578 [2024-10-07 12:32:40.631417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.578 [2024-10-07 12:32:40.631430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.578 [2024-10-07 12:32:40.631443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.578 [2024-10-07 12:32:40.631460] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:17.578 [2024-10-07 12:32:40.631510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.578 [2024-10-07 12:32:40.631524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.578 [2024-10-07 12:32:40.631537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.578 [2024-10-07 12:32:40.631549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.578 [2024-10-07 12:32:40.631710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.935 ms, result 0 00:22:18.512 00:22:18.513 00:22:18.771 12:32:41 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76451 00:22:18.771 12:32:41 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76451 00:22:18.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.771 12:32:41 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76451 ']' 00:22:18.771 12:32:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.771 12:32:41 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.771 12:32:41 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.771 12:32:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.771 12:32:41 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:18.771 12:32:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:18.771 [2024-10-07 12:32:41.936969] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
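At this point trim.sh has launched a fresh spdk_tgt (its DPDK EAL parameters follow below), and the harness's waitforlisten helper blocks until the target's JSON-RPC socket at /var/tmp/spdk.sock accepts connections. A rough Python stand-in for that wait, shown only to illustrate the handshake — SPDK serves plain JSON-RPC 2.0 on that UNIX socket, and rpc_get_methods is a cheap request to confirm it is actually up:

#!/usr/bin/env python3
"""Wait for spdk_tgt's RPC socket, like the waitforlisten helper above.

Illustrative stand-in for the shell helper, not SPDK code: it polls
/var/tmp/spdk.sock (the path logged above) and sends one JSON-RPC
request to verify the target is serving.
"""
import json
import socket
import time

RPC_SOCK = "/var/tmp/spdk.sock"  # socket path from the log above

def wait_for_rpc(path: str = RPC_SOCK, timeout: float = 30.0) -> dict:
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                sock.settimeout(5.0)
                sock.connect(path)
                # SPDK speaks unframed JSON-RPC 2.0 over this socket.
                sock.sendall(json.dumps({
                    "jsonrpc": "2.0", "method": "rpc_get_methods", "id": 1,
                }).encode())
                buf = b""
                while True:  # accumulate until the reply parses as JSON
                    chunk = sock.recv(65536)
                    if not chunk:
                        raise ConnectionError("socket closed mid-reply")
                    buf += chunk
                    try:
                        return json.loads(buf)
                    except json.JSONDecodeError:
                        continue
        except (FileNotFoundError, ConnectionRefusedError):
            if time.monotonic() > deadline:
                raise TimeoutError(f"{path} never came up")
            time.sleep(0.2)

if __name__ == "__main__":
    reply = wait_for_rpc()
    print(f"target up, {len(reply['result'])} RPC methods registered")

Once the socket answers, the test proceeds exactly as logged below: rpc.py load_config replays the saved configuration, and the FTL startup sequence runs again against the restored superblock.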
00:22:18.771 [2024-10-07 12:32:41.937113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76451 ] 00:22:19.029 [2024-10-07 12:32:42.107953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.287 [2024-10-07 12:32:42.325922] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.222 12:32:43 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.222 12:32:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:20.222 12:32:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:20.222 [2024-10-07 12:32:43.487085] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:20.222 [2024-10-07 12:32:43.487168] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:20.482 [2024-10-07 12:32:43.678956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.679032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:20.482 [2024-10-07 12:32:43.679057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:20.482 [2024-10-07 12:32:43.679077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.683135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.683185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:20.482 [2024-10-07 12:32:43.683207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.038 ms 00:22:20.482 [2024-10-07 12:32:43.683220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.683347] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:20.482 [2024-10-07 12:32:43.684398] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:20.482 [2024-10-07 12:32:43.684443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.684456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:20.482 [2024-10-07 12:32:43.684472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:22:20.482 [2024-10-07 12:32:43.684485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.686117] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:20.482 [2024-10-07 12:32:43.706449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.706514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:20.482 [2024-10-07 12:32:43.706533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.376 ms 00:22:20.482 [2024-10-07 12:32:43.706548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.706678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.706707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:20.482 [2024-10-07 12:32:43.706722] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:20.482 [2024-10-07 12:32:43.706740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.713865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.713939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:20.482 [2024-10-07 12:32:43.713956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.074 ms 00:22:20.482 [2024-10-07 12:32:43.713975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.714139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.714162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:20.482 [2024-10-07 12:32:43.714176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:20.482 [2024-10-07 12:32:43.714194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.714230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.714251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:20.482 [2024-10-07 12:32:43.714264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:20.482 [2024-10-07 12:32:43.714282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.714316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:20.482 [2024-10-07 12:32:43.719137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.719179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:20.482 [2024-10-07 12:32:43.719197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.835 ms 00:22:20.482 [2024-10-07 12:32:43.719213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.719302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.719316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:20.482 [2024-10-07 12:32:43.719332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:20.482 [2024-10-07 12:32:43.719345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.719374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:20.482 [2024-10-07 12:32:43.719397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:20.482 [2024-10-07 12:32:43.719449] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:20.482 [2024-10-07 12:32:43.719476] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:20.482 [2024-10-07 12:32:43.719570] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:20.482 [2024-10-07 12:32:43.719586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:20.482 [2024-10-07 12:32:43.719606] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:20.482 [2024-10-07 12:32:43.719622] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:20.482 [2024-10-07 12:32:43.719639] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:20.482 [2024-10-07 12:32:43.719653] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:20.482 [2024-10-07 12:32:43.719668] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:20.482 [2024-10-07 12:32:43.719679] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:20.482 [2024-10-07 12:32:43.719697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:20.482 [2024-10-07 12:32:43.719719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.719735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:20.482 [2024-10-07 12:32:43.719748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:22:20.482 [2024-10-07 12:32:43.719763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.719841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.482 [2024-10-07 12:32:43.719864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:20.482 [2024-10-07 12:32:43.719877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:20.482 [2024-10-07 12:32:43.719892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.482 [2024-10-07 12:32:43.720006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:20.482 [2024-10-07 12:32:43.720028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:20.482 [2024-10-07 12:32:43.720042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:20.482 [2024-10-07 12:32:43.720057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:20.482 [2024-10-07 12:32:43.720083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:20.482 [2024-10-07 12:32:43.720114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:20.482 [2024-10-07 12:32:43.720127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:20.482 [2024-10-07 12:32:43.720161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:20.482 [2024-10-07 12:32:43.720178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:20.482 [2024-10-07 12:32:43.720192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:20.482 [2024-10-07 12:32:43.720210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:20.482 [2024-10-07 12:32:43.720222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:20.482 [2024-10-07 12:32:43.720239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.482 
[2024-10-07 12:32:43.720252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:20.482 [2024-10-07 12:32:43.720269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:20.482 [2024-10-07 12:32:43.720294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:20.482 [2024-10-07 12:32:43.720324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.482 [2024-10-07 12:32:43.720353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:20.482 [2024-10-07 12:32:43.720370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.482 [2024-10-07 12:32:43.720396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:20.482 [2024-10-07 12:32:43.720407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.482 [2024-10-07 12:32:43.720433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:20.482 [2024-10-07 12:32:43.720447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:20.482 [2024-10-07 12:32:43.720458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.483 [2024-10-07 12:32:43.720474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:20.483 [2024-10-07 12:32:43.720486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:20.483 [2024-10-07 12:32:43.720499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:20.483 [2024-10-07 12:32:43.720510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:20.483 [2024-10-07 12:32:43.720524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:20.483 [2024-10-07 12:32:43.720536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:20.483 [2024-10-07 12:32:43.720550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:20.483 [2024-10-07 12:32:43.720561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:20.483 [2024-10-07 12:32:43.720577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.483 [2024-10-07 12:32:43.720588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:20.483 [2024-10-07 12:32:43.720602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:20.483 [2024-10-07 12:32:43.720614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.483 [2024-10-07 12:32:43.720628] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:20.483 [2024-10-07 12:32:43.720642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:20.483 [2024-10-07 12:32:43.720656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:20.483 [2024-10-07 12:32:43.720668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.483 [2024-10-07 12:32:43.720684] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:20.483 [2024-10-07 12:32:43.720695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:20.483 [2024-10-07 12:32:43.720709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:20.483 [2024-10-07 12:32:43.720721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:20.483 [2024-10-07 12:32:43.720734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:20.483 [2024-10-07 12:32:43.720746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:20.483 [2024-10-07 12:32:43.720761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:20.483 [2024-10-07 12:32:43.720776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.720794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:20.483 [2024-10-07 12:32:43.720806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:20.483 [2024-10-07 12:32:43.720822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:20.483 [2024-10-07 12:32:43.720835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:20.483 [2024-10-07 12:32:43.720850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:20.483 [2024-10-07 12:32:43.720863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:20.483 [2024-10-07 12:32:43.720892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:20.483 [2024-10-07 12:32:43.720915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:20.483 [2024-10-07 12:32:43.720931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:20.483 [2024-10-07 12:32:43.720943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.720959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.720972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.720987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.721000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:20.483 [2024-10-07 12:32:43.721014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:20.483 [2024-10-07 
12:32:43.721028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.721051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:20.483 [2024-10-07 12:32:43.721064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:20.483 [2024-10-07 12:32:43.721079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:20.483 [2024-10-07 12:32:43.721091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:20.483 [2024-10-07 12:32:43.721107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.483 [2024-10-07 12:32:43.721121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:20.483 [2024-10-07 12:32:43.721136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:22:20.483 [2024-10-07 12:32:43.721148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.483 [2024-10-07 12:32:43.764053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.483 [2024-10-07 12:32:43.764122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:20.483 [2024-10-07 12:32:43.764143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.897 ms 00:22:20.483 [2024-10-07 12:32:43.764156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.483 [2024-10-07 12:32:43.764330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.483 [2024-10-07 12:32:43.764345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:20.483 [2024-10-07 12:32:43.764362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:20.483 [2024-10-07 12:32:43.764374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.824360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.824423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.742 [2024-10-07 12:32:43.824446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.049 ms 00:22:20.742 [2024-10-07 12:32:43.824458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.824602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.824617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.742 [2024-10-07 12:32:43.824646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:20.742 [2024-10-07 12:32:43.824659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.825143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.825161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.742 [2024-10-07 12:32:43.825181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:22:20.742 [2024-10-07 12:32:43.825194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.825330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.825354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.742 [2024-10-07 12:32:43.825373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:20.742 [2024-10-07 12:32:43.825392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.845982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.846046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.742 [2024-10-07 12:32:43.846071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.584 ms 00:22:20.742 [2024-10-07 12:32:43.846091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.865082] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:20.742 [2024-10-07 12:32:43.865153] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:20.742 [2024-10-07 12:32:43.865179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.865194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:20.742 [2024-10-07 12:32:43.865215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.956 ms 00:22:20.742 [2024-10-07 12:32:43.865227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.895930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.895996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:20.742 [2024-10-07 12:32:43.896022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.634 ms 00:22:20.742 [2024-10-07 12:32:43.896050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.915397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.915455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:20.742 [2024-10-07 12:32:43.915485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.238 ms 00:22:20.742 [2024-10-07 12:32:43.915498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.934785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.934844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:20.742 [2024-10-07 12:32:43.934869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.194 ms 00:22:20.742 [2024-10-07 12:32:43.934882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 12:32:43.935767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:43.935808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:20.742 [2024-10-07 12:32:43.935829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:22:20.742 [2024-10-07 12:32:43.935850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.742 [2024-10-07 
12:32:44.025396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.742 [2024-10-07 12:32:44.025482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:20.742 [2024-10-07 12:32:44.025509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.639 ms 00:22:20.742 [2024-10-07 12:32:44.025530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.037733] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:21.000 [2024-10-07 12:32:44.054730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.054824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.000 [2024-10-07 12:32:44.054843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.073 ms 00:22:21.000 [2024-10-07 12:32:44.054874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.055060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.055084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:21.000 [2024-10-07 12:32:44.055100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:21.000 [2024-10-07 12:32:44.055118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.055185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.055204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:21.000 [2024-10-07 12:32:44.055217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:21.000 [2024-10-07 12:32:44.055236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.055266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.055286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:21.000 [2024-10-07 12:32:44.055310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:21.000 [2024-10-07 12:32:44.055330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.055379] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:21.000 [2024-10-07 12:32:44.055412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.055424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:21.000 [2024-10-07 12:32:44.055442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:21.000 [2024-10-07 12:32:44.055455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.093441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.093503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:21.000 [2024-10-07 12:32:44.093529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.006 ms 00:22:21.000 [2024-10-07 12:32:44.093551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.093710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.000 [2024-10-07 12:32:44.093726] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:21.000 [2024-10-07 12:32:44.093745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:21.000 [2024-10-07 12:32:44.093757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.000 [2024-10-07 12:32:44.094886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.000 [2024-10-07 12:32:44.099651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.233 ms, result 0 00:22:21.000 [2024-10-07 12:32:44.101039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.000 Some configs were skipped because the RPC state that can call them passed over. 00:22:21.000 12:32:44 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:21.259 [2024-10-07 12:32:44.353014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.259 [2024-10-07 12:32:44.353101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:21.259 [2024-10-07 12:32:44.353121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:22:21.259 [2024-10-07 12:32:44.353137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.259 [2024-10-07 12:32:44.353189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.807 ms, result 0 00:22:21.259 true 00:22:21.259 12:32:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:21.517 [2024-10-07 12:32:44.568621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.517 [2024-10-07 12:32:44.568691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:21.517 [2024-10-07 12:32:44.568713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:22:21.517 [2024-10-07 12:32:44.568727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.517 [2024-10-07 12:32:44.568789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.577 ms, result 0 00:22:21.517 true 00:22:21.517 12:32:44 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76451 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76451 ']' 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76451 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76451 00:22:21.517 killing process with pid 76451 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76451' 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76451 00:22:21.517 12:32:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76451 00:22:22.894 [2024-10-07 12:32:45.759483] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.759564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:22.894 [2024-10-07 12:32:45.759581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:22.894 [2024-10-07 12:32:45.759596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.759625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:22.894 [2024-10-07 12:32:45.763789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.763831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:22.894 [2024-10-07 12:32:45.763852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:22:22.894 [2024-10-07 12:32:45.763864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.764130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.764146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:22.894 [2024-10-07 12:32:45.764166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:22:22.894 [2024-10-07 12:32:45.764178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.767610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.767659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:22.894 [2024-10-07 12:32:45.767677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:22:22.894 [2024-10-07 12:32:45.767690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.773337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.773380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:22.894 [2024-10-07 12:32:45.773400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.609 ms 00:22:22.894 [2024-10-07 12:32:45.773416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.789357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.789408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:22.894 [2024-10-07 12:32:45.789432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.896 ms 00:22:22.894 [2024-10-07 12:32:45.789444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.800628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.800682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:22.894 [2024-10-07 12:32:45.800704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.115 ms 00:22:22.894 [2024-10-07 12:32:45.800729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.800892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.800929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:22.894 [2024-10-07 12:32:45.800947] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:22.894 [2024-10-07 12:32:45.800962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.816873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.816932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:22.894 [2024-10-07 12:32:45.816958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.876 ms 00:22:22.894 [2024-10-07 12:32:45.816970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.832728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.832783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:22.894 [2024-10-07 12:32:45.832816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.705 ms 00:22:22.894 [2024-10-07 12:32:45.832829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.848138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.848191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:22.894 [2024-10-07 12:32:45.848215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.257 ms 00:22:22.894 [2024-10-07 12:32:45.848227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.863417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.894 [2024-10-07 12:32:45.863470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:22.894 [2024-10-07 12:32:45.863491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.114 ms 00:22:22.894 [2024-10-07 12:32:45.863504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.894 [2024-10-07 12:32:45.863571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:22.894 [2024-10-07 12:32:45.863592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 
12:32:45.863747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.863989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:22.894 [2024-10-07 12:32:45.864174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:22.894 [2024-10-07 12:32:45.864225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.864986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:22.895 [2024-10-07 12:32:45.865256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:22.895 [2024-10-07 12:32:45.865279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2 00:22:22.895 [2024-10-07 12:32:45.865293] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:22.895 [2024-10-07 12:32:45.865311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:22.895 [2024-10-07 12:32:45.865324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:22.895 [2024-10-07 12:32:45.865342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:22.895 [2024-10-07 12:32:45.865371] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:22.895 [2024-10-07 12:32:45.865390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:22.895 [2024-10-07 12:32:45.865409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:22.895 [2024-10-07 12:32:45.865426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:22.895 [2024-10-07 12:32:45.865438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:22.895 [2024-10-07 12:32:45.865455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
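The per-band dump above has a fixed shape: band index, valid LBAs out of the 261120-block band size, write count, and band state. When skimming a long capture it is quicker to summarize than to read all 100 records; a throwaway awk sketch along these lines works on the raw console log (assuming one record per line, as in the captured output; the file name build.log is hypothetical):

  awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
           bands++
           # the field two past "Band N:" is the valid-LBA count; the last field is the state
           for (i = 1; i < NF; i++) if ($i == "Band") valid += $(i + 2)
           states[$NF]++
       }
       END {
           printf "bands: %d, valid LBAs: %d\n", bands, valid
           for (s in states) printf "state %s: %d\n", s, states[s]
       }' build.log

For the dump above it would report 100 bands, 0 valid LBAs, and all bands free, consistent with the statistics block that follows (total valid LBAs: 0, user writes: 0).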
00:22:22.895 [2024-10-07 12:32:45.865468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:22.895 [2024-10-07 12:32:45.865489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:22:22.895 [2024-10-07 12:32:45.865501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:45.886106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.895 [2024-10-07 12:32:45.886160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:22.895 [2024-10-07 12:32:45.886188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.593 ms 00:22:22.895 [2024-10-07 12:32:45.886201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:45.886856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.895 [2024-10-07 12:32:45.886884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:22.895 [2024-10-07 12:32:45.886921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:22:22.895 [2024-10-07 12:32:45.886935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:45.952172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.895 [2024-10-07 12:32:45.952241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.895 [2024-10-07 12:32:45.952265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.895 [2024-10-07 12:32:45.952285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:45.952428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.895 [2024-10-07 12:32:45.952443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.895 [2024-10-07 12:32:45.952462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.895 [2024-10-07 12:32:45.952474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:45.952546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.895 [2024-10-07 12:32:45.952562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.895 [2024-10-07 12:32:45.952586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.895 [2024-10-07 12:32:45.952599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:45.952636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.895 [2024-10-07 12:32:45.952649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.895 [2024-10-07 12:32:45.952667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.895 [2024-10-07 12:32:45.952680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.895 [2024-10-07 12:32:46.080271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.895 [2024-10-07 12:32:46.080347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.895 [2024-10-07 12:32:46.080373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.895 [2024-10-07 12:32:46.080394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.154 [2024-10-07 
12:32:46.185777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.154 [2024-10-07 12:32:46.185855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.154 [2024-10-07 12:32:46.185881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.185894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.155 [2024-10-07 12:32:46.186062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.155 [2024-10-07 12:32:46.186087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.186100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.155 [2024-10-07 12:32:46.186160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.155 [2024-10-07 12:32:46.186177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.186190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.155 [2024-10-07 12:32:46.186360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.155 [2024-10-07 12:32:46.186379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.186391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.155 [2024-10-07 12:32:46.186459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.155 [2024-10-07 12:32:46.186485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.186498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.155 [2024-10-07 12:32:46.186561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.155 [2024-10-07 12:32:46.186584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.186598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.155 [2024-10-07 12:32:46.186673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.155 [2024-10-07 12:32:46.186693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.155 [2024-10-07 12:32:46.186706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.155 [2024-10-07 12:32:46.186868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 428.053 ms, result 0 00:22:24.090 12:32:47 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:24.348 [2024-10-07 12:32:47.456653] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:22:24.348 [2024-10-07 12:32:47.456783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76527 ] 00:22:24.348 [2024-10-07 12:32:47.629611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.606 [2024-10-07 12:32:47.849257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.174 [2024-10-07 12:32:48.230500] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:25.174 [2024-10-07 12:32:48.230586] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:25.174 [2024-10-07 12:32:48.395152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.174 [2024-10-07 12:32:48.395226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:25.174 [2024-10-07 12:32:48.395248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:25.174 [2024-10-07 12:32:48.395261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.174 [2024-10-07 12:32:48.398401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.174 [2024-10-07 12:32:48.398450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.174 [2024-10-07 12:32:48.398466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:22:25.174 [2024-10-07 12:32:48.398482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.174 [2024-10-07 12:32:48.398605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:25.174 [2024-10-07 12:32:48.399568] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:25.174 [2024-10-07 12:32:48.399606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.399623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.175 [2024-10-07 12:32:48.399637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:22:25.175 [2024-10-07 12:32:48.399649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.401283] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:25.175 [2024-10-07 12:32:48.421321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.421372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:25.175 [2024-10-07 12:32:48.421390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.069 ms 00:22:25.175 [2024-10-07 12:32:48.421403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.421542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.421558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:25.175 [2024-10-07 12:32:48.421576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:25.175 [2024-10-07 
12:32:48.421588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.428850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.428887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.175 [2024-10-07 12:32:48.428912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.221 ms 00:22:25.175 [2024-10-07 12:32:48.428926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.429059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.429082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.175 [2024-10-07 12:32:48.429096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:25.175 [2024-10-07 12:32:48.429108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.429147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.429161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:25.175 [2024-10-07 12:32:48.429174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:25.175 [2024-10-07 12:32:48.429187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.429230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:25.175 [2024-10-07 12:32:48.434072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.434132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.175 [2024-10-07 12:32:48.434147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.857 ms 00:22:25.175 [2024-10-07 12:32:48.434160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.434248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.434269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:25.175 [2024-10-07 12:32:48.434282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:25.175 [2024-10-07 12:32:48.434294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.434322] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:25.175 [2024-10-07 12:32:48.434347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:25.175 [2024-10-07 12:32:48.434386] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:25.175 [2024-10-07 12:32:48.434407] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:25.175 [2024-10-07 12:32:48.434502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:25.175 [2024-10-07 12:32:48.434518] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:25.175 [2024-10-07 12:32:48.434533] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
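Worth noting while reading this second 'FTL startup' sequence: the trim test drives everything through ordinary SPDK command-line tools. The two bdev_ftl_unmap calls earlier go over JSON-RPC to the app under test (which the log then shows being killed via killprocess), after which spdk_dd starts as a standalone app and re-creates the ftl0 bdev from the saved JSON config — which is why the full startup and metadata-restore sequence repeats here. A minimal by-hand reproduction, reusing the bdev name, LBAs, and config path from this log (paths relative to the SPDK repo root; the -s socket argument is rpc.py's default and is spelled out only for clarity):

  # trim 1024 blocks at LBA 0 and at LBA 23591936 on bdev ftl0
  # (the ftl/trim.sh@99 and @100 steps above)
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # after stopping the target: copy 65536 blocks out of ftl0 into a flat file,
  # re-creating the bdev stack from the saved config; at the FTL's 4 KiB block
  # size this is the 256 MiB transfer reported in the copy progress further down
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 \
      --json=test/ftl/config/ftl.json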
00:22:25.175 [2024-10-07 12:32:48.434549] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:25.175 [2024-10-07 12:32:48.434563] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:25.175 [2024-10-07 12:32:48.434576] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:25.175 [2024-10-07 12:32:48.434588] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:25.175 [2024-10-07 12:32:48.434600] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:25.175 [2024-10-07 12:32:48.434611] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:25.175 [2024-10-07 12:32:48.434623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.434639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:25.175 [2024-10-07 12:32:48.434651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:22:25.175 [2024-10-07 12:32:48.434663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.434744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.175 [2024-10-07 12:32:48.434757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:25.175 [2024-10-07 12:32:48.434769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:25.175 [2024-10-07 12:32:48.434780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.175 [2024-10-07 12:32:48.434873] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:25.175 [2024-10-07 12:32:48.434888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:25.175 [2024-10-07 12:32:48.434922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.175 [2024-10-07 12:32:48.434934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.175 [2024-10-07 12:32:48.434947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:25.175 [2024-10-07 12:32:48.434958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:25.176 [2024-10-07 12:32:48.434970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:25.176 [2024-10-07 12:32:48.434989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:25.176 [2024-10-07 12:32:48.435001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.176 [2024-10-07 12:32:48.435024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:25.176 [2024-10-07 12:32:48.435048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:25.176 [2024-10-07 12:32:48.435059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.176 [2024-10-07 12:32:48.435071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:25.176 [2024-10-07 12:32:48.435082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:25.176 [2024-10-07 12:32:48.435093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:25.176 [2024-10-07 12:32:48.435115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:25.176 [2024-10-07 12:32:48.435147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:25.176 [2024-10-07 12:32:48.435180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:25.176 [2024-10-07 12:32:48.435214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:25.176 [2024-10-07 12:32:48.435246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:25.176 [2024-10-07 12:32:48.435279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.176 [2024-10-07 12:32:48.435301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:25.176 [2024-10-07 12:32:48.435312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:25.176 [2024-10-07 12:32:48.435323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.176 [2024-10-07 12:32:48.435334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:25.176 [2024-10-07 12:32:48.435345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:25.176 [2024-10-07 12:32:48.435355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:25.176 [2024-10-07 12:32:48.435377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:25.176 [2024-10-07 12:32:48.435388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435399] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:25.176 [2024-10-07 12:32:48.435411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:25.176 [2024-10-07 12:32:48.435422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.176 [2024-10-07 12:32:48.435445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:25.176 [2024-10-07 12:32:48.435456] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:25.176 [2024-10-07 12:32:48.435467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:25.176 [2024-10-07 12:32:48.435478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:25.176 [2024-10-07 12:32:48.435489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:25.176 [2024-10-07 12:32:48.435500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:25.176 [2024-10-07 12:32:48.435512] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:25.176 [2024-10-07 12:32:48.435531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.176 [2024-10-07 12:32:48.435544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:25.176 [2024-10-07 12:32:48.435557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:25.176 [2024-10-07 12:32:48.435571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:25.176 [2024-10-07 12:32:48.435583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:25.176 [2024-10-07 12:32:48.435595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:25.176 [2024-10-07 12:32:48.435607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:25.176 [2024-10-07 12:32:48.435619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:25.176 [2024-10-07 12:32:48.435631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:25.176 [2024-10-07 12:32:48.435643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:25.177 [2024-10-07 12:32:48.435656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:25.177 [2024-10-07 12:32:48.435667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:25.177 [2024-10-07 12:32:48.435679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:25.177 [2024-10-07 12:32:48.435690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:25.177 [2024-10-07 12:32:48.435703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:25.177 [2024-10-07 12:32:48.435714] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:25.177 [2024-10-07 12:32:48.435727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.177 [2024-10-07 12:32:48.435750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:25.177 [2024-10-07 12:32:48.435762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:25.177 [2024-10-07 12:32:48.435774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:25.177 [2024-10-07 12:32:48.435786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:25.177 [2024-10-07 12:32:48.435799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.177 [2024-10-07 12:32:48.435815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:25.177 [2024-10-07 12:32:48.435827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:22:25.177 [2024-10-07 12:32:48.435840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.483804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.483879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.436 [2024-10-07 12:32:48.483897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.962 ms 00:22:25.436 [2024-10-07 12:32:48.483921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.484099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.484115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:25.436 [2024-10-07 12:32:48.484128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:25.436 [2024-10-07 12:32:48.484140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.530242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.530304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.436 [2024-10-07 12:32:48.530323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.145 ms 00:22:25.436 [2024-10-07 12:32:48.530335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.530490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.530505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.436 [2024-10-07 12:32:48.530518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:25.436 [2024-10-07 12:32:48.530531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.531021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.531047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.436 [2024-10-07 12:32:48.531060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:22:25.436 [2024-10-07 12:32:48.531072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.531209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.531226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.436 [2024-10-07 12:32:48.531238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:25.436 [2024-10-07 12:32:48.531250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.550432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.550497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.436 [2024-10-07 12:32:48.550515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.183 ms 00:22:25.436 [2024-10-07 12:32:48.550528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.570125] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:25.436 [2024-10-07 12:32:48.570186] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:25.436 [2024-10-07 12:32:48.570205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.570218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:25.436 [2024-10-07 12:32:48.570233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.530 ms 00:22:25.436 [2024-10-07 12:32:48.570244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.600498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.600566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:25.436 [2024-10-07 12:32:48.600596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.188 ms 00:22:25.436 [2024-10-07 12:32:48.600608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.619527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.619594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:25.436 [2024-10-07 12:32:48.619612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.812 ms 00:22:25.436 [2024-10-07 12:32:48.619624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.638078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.638136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:25.436 [2024-10-07 12:32:48.638154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.365 ms 00:22:25.436 [2024-10-07 12:32:48.638165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.639039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 12:32:48.639077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:25.436 [2024-10-07 12:32:48.639092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:22:25.436 [2024-10-07 12:32:48.639104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.436 [2024-10-07 12:32:48.726108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.436 [2024-10-07 
12:32:48.726185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:25.436 [2024-10-07 12:32:48.726204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.105 ms 00:22:25.436 [2024-10-07 12:32:48.726217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.737948] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:25.695 [2024-10-07 12:32:48.754871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.754955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:25.695 [2024-10-07 12:32:48.754973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.518 ms 00:22:25.695 [2024-10-07 12:32:48.754992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.755153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.755169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:25.695 [2024-10-07 12:32:48.755182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:25.695 [2024-10-07 12:32:48.755194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.755261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.755280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:25.695 [2024-10-07 12:32:48.755293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:25.695 [2024-10-07 12:32:48.755305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.755337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.755350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:25.695 [2024-10-07 12:32:48.755363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:25.695 [2024-10-07 12:32:48.755374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.755414] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:25.695 [2024-10-07 12:32:48.755427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.755444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:25.695 [2024-10-07 12:32:48.755456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:25.695 [2024-10-07 12:32:48.755468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.792648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.792718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:25.695 [2024-10-07 12:32:48.792736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.209 ms 00:22:25.695 [2024-10-07 12:32:48.792748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.792926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.695 [2024-10-07 12:32:48.792943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:25.695 [2024-10-07 
12:32:48.792957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:25.695 [2024-10-07 12:32:48.792969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.695 [2024-10-07 12:32:48.794020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:25.695 [2024-10-07 12:32:48.798571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.176 ms, result 0 00:22:25.695 [2024-10-07 12:32:48.799539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:25.695 [2024-10-07 12:32:48.818323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:26.630  [2024-10-07T12:32:51.296Z] Copying: 24/256 [MB] (24 MBps) ... [2024-10-07T12:33:01.018Z] Copying: 256/256 [MB] (average 22 MBps)[2024-10-07 12:33:00.911753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:37.727 [2024-10-07 12:33:00.937432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:00.937505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:37.727 [2024-10-07 12:33:00.937523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:37.727 [2024-10-07 12:33:00.937536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:00.937570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:37.727 [2024-10-07 12:33:00.941736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:00.941774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:37.727 [2024-10-07 12:33:00.941789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:22:37.727 [2024-10-07 12:33:00.941801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:00.942072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:00.942094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:37.727 [2024-10-07 12:33:00.942107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:22:37.727 [2024-10-07 12:33:00.942119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:00.944993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:00.945022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:37.727 [2024-10-07 12:33:00.945036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:22:37.727 [2024-10-07 
12:33:00.945048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:00.950648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:00.950691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:37.727 [2024-10-07 12:33:00.950712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.584 ms 00:22:37.727 [2024-10-07 12:33:00.950724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:00.989129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:00.989198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:37.727 [2024-10-07 12:33:00.989217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.373 ms 00:22:37.727 [2024-10-07 12:33:00.989231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:01.011641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:01.011716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:37.727 [2024-10-07 12:33:01.011735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.356 ms 00:22:37.727 [2024-10-07 12:33:01.011747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.727 [2024-10-07 12:33:01.011991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.727 [2024-10-07 12:33:01.012008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:37.727 [2024-10-07 12:33:01.012022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:22:37.727 [2024-10-07 12:33:01.012034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.987 [2024-10-07 12:33:01.050987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.987 [2024-10-07 12:33:01.051055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:37.987 [2024-10-07 12:33:01.051074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.980 ms 00:22:37.987 [2024-10-07 12:33:01.051086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.987 [2024-10-07 12:33:01.088370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.987 [2024-10-07 12:33:01.088429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:37.987 [2024-10-07 12:33:01.088447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.252 ms 00:22:37.987 [2024-10-07 12:33:01.088459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.987 [2024-10-07 12:33:01.125711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.987 [2024-10-07 12:33:01.125794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:37.987 [2024-10-07 12:33:01.125812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.229 ms 00:22:37.987 [2024-10-07 12:33:01.125825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.987 [2024-10-07 12:33:01.162716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.987 [2024-10-07 12:33:01.162781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:37.987 [2024-10-07 12:33:01.162799] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.816 ms 00:22:37.987 [2024-10-07 12:33:01.162811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.987 [2024-10-07 12:33:01.162890] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:37.987 [2024-10-07 12:33:01.162935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free 00:22:37.988 [2024-10-07 12:33:01.164212] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:37.988 [2024-10-07 12:33:01.164224] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8095ccf6-8e51-4990-b00e-376620ab48a2 00:22:37.988 [2024-10-07 12:33:01.164237] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:37.988 [2024-10-07 12:33:01.164248] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:37.988 [2024-10-07 12:33:01.164260] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:37.988 [2024-10-07 12:33:01.164278] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:37.988 [2024-10-07 12:33:01.164289] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:37.988 [2024-10-07 12:33:01.164301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:37.988 [2024-10-07 12:33:01.164313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:37.988 [2024-10-07 12:33:01.164324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:37.988 [2024-10-07 12:33:01.164335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:37.988 [2024-10-07 12:33:01.164347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.988 [2024-10-07 12:33:01.164359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:37.988 [2024-10-07 12:33:01.164371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:22:37.988 [2024-10-07 12:33:01.164384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.988 [2024-10-07 12:33:01.184939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.988 [2024-10-07 12:33:01.185000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:37.988 [2024-10-07 12:33:01.185016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.560 ms 00:22:37.988 [2024-10-07 12:33:01.185029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.988 [2024-10-07 12:33:01.185615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.988 [2024-10-07 12:33:01.185641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:37.988 [2024-10-07 12:33:01.185654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:22:37.988 [2024-10-07 12:33:01.185666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.988 [2024-10-07 12:33:01.233372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.988 [2024-10-07 12:33:01.233441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:37.988 [2024-10-07 12:33:01.233458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.988 [2024-10-07 12:33:01.233471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.988 [2024-10-07 12:33:01.233612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
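Every management step in the startup and shutdown traces above logs a matching 'name:' and 'duration:' pair, so the slow steps (here the 22-39 ms persist steps dominate the shutdown) can be ranked offline. A minimal sketch, assuming the console output is saved one record per line as ftl.log; the field handling follows the trace_step format shown above, and the script itself is not part of the test suite:
  # Rank FTL management steps by duration, slowest first (sketch only;
  # assumes each 'name:' trace line is followed by its 'duration:' line).
  awk '/trace_step.*name:/     { sub(/.*name: /, "");  name = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                 printf "%10.3f ms  %s\n", $0, name }' ftl.log |
    sort -rn | head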
00:22:37.988 [2024-10-07 12:33:01.233628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:37.988 [2024-10-07 12:33:01.233642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.988 [2024-10-07 12:33:01.233654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.988 [2024-10-07 12:33:01.233716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.988 [2024-10-07 12:33:01.233736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:37.988 [2024-10-07 12:33:01.233748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.988 [2024-10-07 12:33:01.233760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.988 [2024-10-07 12:33:01.233783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.988 [2024-10-07 12:33:01.233795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:37.988 [2024-10-07 12:33:01.233807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.988 [2024-10-07 12:33:01.233819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.359723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.359814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:38.248 [2024-10-07 12:33:01.359832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.359844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.462999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:38.248 [2024-10-07 12:33:01.463087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.463202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:38.248 [2024-10-07 12:33:01.463240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.463286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:38.248 [2024-10-07 12:33:01.463311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.463441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:38.248 [2024-10-07 12:33:01.463470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 
12:33:01.463532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:38.248 [2024-10-07 12:33:01.463559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.463615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:38.248 [2024-10-07 12:33:01.463640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.463705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.248 [2024-10-07 12:33:01.463720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:38.248 [2024-10-07 12:33:01.463731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.248 [2024-10-07 12:33:01.463744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.248 [2024-10-07 12:33:01.463920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.335 ms, result 0 00:22:39.625 00:22:39.625 00:22:39.625 12:33:02 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:39.884 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:39.884 12:33:03 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:39.884 12:33:03 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:39.884 12:33:03 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:39.884 12:33:03 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:39.884 12:33:03 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:40.143 12:33:03 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:40.143 12:33:03 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76451 00:22:40.143 12:33:03 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76451 ']' 00:22:40.143 12:33:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76451 00:22:40.143 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76451) - No such process 00:22:40.143 Process with pid 76451 is not found 00:22:40.143 12:33:03 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 76451 is not found' 00:22:40.143 00:22:40.143 real 1m17.161s 00:22:40.143 user 1m43.765s 00:22:40.143 sys 0m7.171s 00:22:40.143 12:33:03 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:40.143 12:33:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:40.143 ************************************ 00:22:40.143 END TEST ftl_trim 00:22:40.143 ************************************ 00:22:40.143 12:33:03 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:40.143 12:33:03 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:40.143 12:33:03 ftl -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:22:40.143 12:33:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:40.143 ************************************ 00:22:40.143 START TEST ftl_restore 00:22:40.143 ************************************ 00:22:40.143 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:40.403 * Looking for test storage... 00:22:40.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:40.403 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:40.403 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:22:40.403 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:40.403 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:40.403 12:33:03 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.404 12:33:03 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.404 --rc genhtml_branch_coverage=1 00:22:40.404 --rc genhtml_function_coverage=1 00:22:40.404 --rc genhtml_legend=1 00:22:40.404 --rc geninfo_all_blocks=1 00:22:40.404 --rc geninfo_unexecuted_blocks=1 00:22:40.404 00:22:40.404 ' 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.404 --rc genhtml_branch_coverage=1 00:22:40.404 --rc genhtml_function_coverage=1 00:22:40.404 --rc genhtml_legend=1 00:22:40.404 --rc geninfo_all_blocks=1 00:22:40.404 --rc geninfo_unexecuted_blocks=1 00:22:40.404 00:22:40.404 ' 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.404 --rc genhtml_branch_coverage=1 00:22:40.404 --rc genhtml_function_coverage=1 00:22:40.404 --rc genhtml_legend=1 00:22:40.404 --rc geninfo_all_blocks=1 00:22:40.404 --rc geninfo_unexecuted_blocks=1 00:22:40.404 00:22:40.404 ' 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:40.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.404 --rc genhtml_branch_coverage=1 00:22:40.404 --rc genhtml_function_coverage=1 00:22:40.404 --rc genhtml_legend=1 00:22:40.404 --rc geninfo_all_blocks=1 00:22:40.404 --rc geninfo_unexecuted_blocks=1 00:22:40.404 00:22:40.404 ' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
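The xtrace block above is scripts/common.sh deciding, via cmp_versions, whether the installed lcov (1.15) predates 2.x and therefore needs the extra --rc branch/function coverage options: both version strings are split on '.', '-' and ':' and compared field by field. The same idea as a standalone helper (the name ver_lt is mine, not the script's):
  # Succeeds when version $1 is strictly older than version $2,
  # comparing dot-separated fields numerically, missing fields as 0.
  ver_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
  }
  ver_lt 1.15 2 && echo "old lcov: add branch/function coverage flags"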
00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.lC5b0QatnN 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:40.404 
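The restore.sh prologue traced above is plain option plumbing: mktemp -d provides a scratch mount point, getopts maps -c to the NV-cache PCIe address, and after shift 2 the first remaining positional argument is the base device; a 240-second timeout and a restore_kill trap round out the setup. Reduced to what the trace shows (restore_kill is the script's own cleanup function and is not reproduced here):
  mount_dir=$(mktemp -d)
  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;    # here: 0000:00:10.0
    esac                        # -u/-f branches omitted in this sketch
  done
  shift 2                       # drop '-c <bdf>', leaving the device BDF
  device=$1                     # here: 0000:00:11.0
  timeout=240
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT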
12:33:03 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76760 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:40.404 12:33:03 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76760 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76760 ']' 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.404 12:33:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:40.662 [2024-10-07 12:33:03.702795] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:22:40.663 [2024-10-07 12:33:03.702942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76760 ] 00:22:40.663 [2024-10-07 12:33:03.877868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.921 [2024-10-07 12:33:04.091996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.856 12:33:04 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.856 12:33:04 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:22:41.856 12:33:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:41.856 12:33:04 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:41.856 12:33:04 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:41.856 12:33:04 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:41.856 12:33:04 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:41.856 12:33:04 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:42.114 12:33:05 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:42.114 12:33:05 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:42.114 12:33:05 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:42.114 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:42.114 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:42.114 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:42.114 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:42.114 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:42.372 { 00:22:42.372 "name": "nvme0n1", 00:22:42.372 "aliases": [ 00:22:42.372 "70fd204a-33ee-41e4-b8e2-74ccd2600f63" 00:22:42.372 ], 00:22:42.372 "product_name": "NVMe disk", 00:22:42.372 "block_size": 4096, 00:22:42.372 "num_blocks": 1310720, 00:22:42.372 "uuid": 
"70fd204a-33ee-41e4-b8e2-74ccd2600f63", 00:22:42.372 "numa_id": -1, 00:22:42.372 "assigned_rate_limits": { 00:22:42.372 "rw_ios_per_sec": 0, 00:22:42.372 "rw_mbytes_per_sec": 0, 00:22:42.372 "r_mbytes_per_sec": 0, 00:22:42.372 "w_mbytes_per_sec": 0 00:22:42.372 }, 00:22:42.372 "claimed": true, 00:22:42.372 "claim_type": "read_many_write_one", 00:22:42.372 "zoned": false, 00:22:42.372 "supported_io_types": { 00:22:42.372 "read": true, 00:22:42.372 "write": true, 00:22:42.372 "unmap": true, 00:22:42.372 "flush": true, 00:22:42.372 "reset": true, 00:22:42.372 "nvme_admin": true, 00:22:42.372 "nvme_io": true, 00:22:42.372 "nvme_io_md": false, 00:22:42.372 "write_zeroes": true, 00:22:42.372 "zcopy": false, 00:22:42.372 "get_zone_info": false, 00:22:42.372 "zone_management": false, 00:22:42.372 "zone_append": false, 00:22:42.372 "compare": true, 00:22:42.372 "compare_and_write": false, 00:22:42.372 "abort": true, 00:22:42.372 "seek_hole": false, 00:22:42.372 "seek_data": false, 00:22:42.372 "copy": true, 00:22:42.372 "nvme_iov_md": false 00:22:42.372 }, 00:22:42.372 "driver_specific": { 00:22:42.372 "nvme": [ 00:22:42.372 { 00:22:42.372 "pci_address": "0000:00:11.0", 00:22:42.372 "trid": { 00:22:42.372 "trtype": "PCIe", 00:22:42.372 "traddr": "0000:00:11.0" 00:22:42.372 }, 00:22:42.372 "ctrlr_data": { 00:22:42.372 "cntlid": 0, 00:22:42.372 "vendor_id": "0x1b36", 00:22:42.372 "model_number": "QEMU NVMe Ctrl", 00:22:42.372 "serial_number": "12341", 00:22:42.372 "firmware_revision": "8.0.0", 00:22:42.372 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:42.372 "oacs": { 00:22:42.372 "security": 0, 00:22:42.372 "format": 1, 00:22:42.372 "firmware": 0, 00:22:42.372 "ns_manage": 1 00:22:42.372 }, 00:22:42.372 "multi_ctrlr": false, 00:22:42.372 "ana_reporting": false 00:22:42.372 }, 00:22:42.372 "vs": { 00:22:42.372 "nvme_version": "1.4" 00:22:42.372 }, 00:22:42.372 "ns_data": { 00:22:42.372 "id": 1, 00:22:42.372 "can_share": false 00:22:42.372 } 00:22:42.372 } 00:22:42.372 ], 00:22:42.372 "mp_policy": "active_passive" 00:22:42.372 } 00:22:42.372 } 00:22:42.372 ]' 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:42.372 12:33:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:22:42.372 12:33:05 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:42.372 12:33:05 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:42.372 12:33:05 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:42.372 12:33:05 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:42.372 12:33:05 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:42.630 12:33:05 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1e97e3a4-9cb2-4c79-9a40-2ec513249572 00:22:42.630 12:33:05 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:42.630 12:33:05 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e97e3a4-9cb2-4c79-9a40-2ec513249572 00:22:42.888 12:33:06 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:43.146 12:33:06 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=70db2f1d-583a-4651-86bc-4e3171117211 00:22:43.146 12:33:06 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 70db2f1d-583a-4651-86bc-4e3171117211 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:43.404 12:33:06 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.404 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.404 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:43.404 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:43.404 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:43.404 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.663 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:43.663 { 00:22:43.663 "name": "61cc76f7-59fc-4acd-94b4-23eb2adedcdc", 00:22:43.663 "aliases": [ 00:22:43.663 "lvs/nvme0n1p0" 00:22:43.663 ], 00:22:43.663 "product_name": "Logical Volume", 00:22:43.663 "block_size": 4096, 00:22:43.663 "num_blocks": 26476544, 00:22:43.663 "uuid": "61cc76f7-59fc-4acd-94b4-23eb2adedcdc", 00:22:43.663 "assigned_rate_limits": { 00:22:43.663 "rw_ios_per_sec": 0, 00:22:43.663 "rw_mbytes_per_sec": 0, 00:22:43.663 "r_mbytes_per_sec": 0, 00:22:43.663 "w_mbytes_per_sec": 0 00:22:43.663 }, 00:22:43.663 "claimed": false, 00:22:43.663 "zoned": false, 00:22:43.663 "supported_io_types": { 00:22:43.663 "read": true, 00:22:43.663 "write": true, 00:22:43.663 "unmap": true, 00:22:43.663 "flush": false, 00:22:43.663 "reset": true, 00:22:43.663 "nvme_admin": false, 00:22:43.663 "nvme_io": false, 00:22:43.663 "nvme_io_md": false, 00:22:43.663 "write_zeroes": true, 00:22:43.663 "zcopy": false, 00:22:43.663 "get_zone_info": false, 00:22:43.663 "zone_management": false, 00:22:43.663 "zone_append": false, 00:22:43.663 "compare": false, 00:22:43.663 "compare_and_write": false, 00:22:43.663 "abort": false, 00:22:43.663 "seek_hole": true, 00:22:43.663 "seek_data": true, 00:22:43.663 "copy": false, 00:22:43.663 "nvme_iov_md": false 00:22:43.663 }, 00:22:43.663 "driver_specific": { 00:22:43.663 "lvol": { 00:22:43.663 "lvol_store_uuid": "70db2f1d-583a-4651-86bc-4e3171117211", 00:22:43.663 "base_bdev": "nvme0n1", 00:22:43.663 "thin_provision": true, 00:22:43.663 "num_allocated_clusters": 0, 00:22:43.663 "snapshot": false, 00:22:43.663 "clone": false, 00:22:43.663 "esnap_clone": false 00:22:43.663 } 00:22:43.663 } 00:22:43.663 } 00:22:43.663 ]' 00:22:43.663 12:33:06 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:43.663 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:43.663 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:43.663 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:43.663 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:43.663 12:33:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:43.663 12:33:06 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:43.663 12:33:06 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:43.663 12:33:06 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:43.921 12:33:07 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:43.921 12:33:07 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:43.921 12:33:07 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.921 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:43.921 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:43.921 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:43.921 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:43.921 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:44.179 { 00:22:44.179 "name": "61cc76f7-59fc-4acd-94b4-23eb2adedcdc", 00:22:44.179 "aliases": [ 00:22:44.179 "lvs/nvme0n1p0" 00:22:44.179 ], 00:22:44.179 "product_name": "Logical Volume", 00:22:44.179 "block_size": 4096, 00:22:44.179 "num_blocks": 26476544, 00:22:44.179 "uuid": "61cc76f7-59fc-4acd-94b4-23eb2adedcdc", 00:22:44.179 "assigned_rate_limits": { 00:22:44.179 "rw_ios_per_sec": 0, 00:22:44.179 "rw_mbytes_per_sec": 0, 00:22:44.179 "r_mbytes_per_sec": 0, 00:22:44.179 "w_mbytes_per_sec": 0 00:22:44.179 }, 00:22:44.179 "claimed": false, 00:22:44.179 "zoned": false, 00:22:44.179 "supported_io_types": { 00:22:44.179 "read": true, 00:22:44.179 "write": true, 00:22:44.179 "unmap": true, 00:22:44.179 "flush": false, 00:22:44.179 "reset": true, 00:22:44.179 "nvme_admin": false, 00:22:44.179 "nvme_io": false, 00:22:44.179 "nvme_io_md": false, 00:22:44.179 "write_zeroes": true, 00:22:44.179 "zcopy": false, 00:22:44.179 "get_zone_info": false, 00:22:44.179 "zone_management": false, 00:22:44.179 "zone_append": false, 00:22:44.179 "compare": false, 00:22:44.179 "compare_and_write": false, 00:22:44.179 "abort": false, 00:22:44.179 "seek_hole": true, 00:22:44.179 "seek_data": true, 00:22:44.179 "copy": false, 00:22:44.179 "nvme_iov_md": false 00:22:44.179 }, 00:22:44.179 "driver_specific": { 00:22:44.179 "lvol": { 00:22:44.179 "lvol_store_uuid": "70db2f1d-583a-4651-86bc-4e3171117211", 00:22:44.179 "base_bdev": "nvme0n1", 00:22:44.179 "thin_provision": true, 00:22:44.179 "num_allocated_clusters": 0, 00:22:44.179 "snapshot": false, 00:22:44.179 "clone": false, 00:22:44.179 "esnap_clone": false 00:22:44.179 } 00:22:44.179 } 00:22:44.179 } 00:22:44.179 ]' 00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
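The trace above clears any stale lvol store left on nvme0n1, then lays down the FTL base volume: a fresh store named lvs and a 103424 MiB thin-provisioned (-t) volume nvme0n1p0 inside it, whose UUID (61cc76f7-...) is what every later call refers to; finally nvc0 is attached at 0000:00:10.0 for the cache side. Condensed, the same rpc.py calls are (a sketch; UUIDs differ per run, $rpc as in the sketch above):
  "$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid' |
    xargs -r -n1 "$rpc" bdev_lvol_delete_lvstore -u    # drop stale stores
  lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)
  base=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
  "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0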
00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:44.179 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:44.179 12:33:07 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:44.180 12:33:07 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:44.459 12:33:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:44.459 12:33:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:44.459 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:44.459 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:44.459 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:44.459 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:44.459 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61cc76f7-59fc-4acd-94b4-23eb2adedcdc 00:22:44.733 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:44.733 { 00:22:44.733 "name": "61cc76f7-59fc-4acd-94b4-23eb2adedcdc", 00:22:44.733 "aliases": [ 00:22:44.733 "lvs/nvme0n1p0" 00:22:44.733 ], 00:22:44.733 "product_name": "Logical Volume", 00:22:44.733 "block_size": 4096, 00:22:44.733 "num_blocks": 26476544, 00:22:44.733 "uuid": "61cc76f7-59fc-4acd-94b4-23eb2adedcdc", 00:22:44.733 "assigned_rate_limits": { 00:22:44.733 "rw_ios_per_sec": 0, 00:22:44.733 "rw_mbytes_per_sec": 0, 00:22:44.733 "r_mbytes_per_sec": 0, 00:22:44.733 "w_mbytes_per_sec": 0 00:22:44.733 }, 00:22:44.733 "claimed": false, 00:22:44.733 "zoned": false, 00:22:44.733 "supported_io_types": { 00:22:44.733 "read": true, 00:22:44.733 "write": true, 00:22:44.733 "unmap": true, 00:22:44.733 "flush": false, 00:22:44.733 "reset": true, 00:22:44.733 "nvme_admin": false, 00:22:44.733 "nvme_io": false, 00:22:44.733 "nvme_io_md": false, 00:22:44.733 "write_zeroes": true, 00:22:44.733 "zcopy": false, 00:22:44.733 "get_zone_info": false, 00:22:44.733 "zone_management": false, 00:22:44.733 "zone_append": false, 00:22:44.733 "compare": false, 00:22:44.733 "compare_and_write": false, 00:22:44.733 "abort": false, 00:22:44.733 "seek_hole": true, 00:22:44.733 "seek_data": true, 00:22:44.733 "copy": false, 00:22:44.733 "nvme_iov_md": false 00:22:44.733 }, 00:22:44.733 "driver_specific": { 00:22:44.733 "lvol": { 00:22:44.733 "lvol_store_uuid": "70db2f1d-583a-4651-86bc-4e3171117211", 00:22:44.733 "base_bdev": "nvme0n1", 00:22:44.733 "thin_provision": true, 00:22:44.733 "num_allocated_clusters": 0, 00:22:44.733 "snapshot": false, 00:22:44.733 "clone": false, 00:22:44.733 "esnap_clone": false 00:22:44.733 } 00:22:44.733 } 00:22:44.733 } 00:22:44.733 ]' 00:22:44.733 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:44.733 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:44.733 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:44.733 12:33:07 ftl.ftl_restore -- 
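The trace continues below: cache_size lands at 5171 MiB, bdev_split_create carves one slice of that size off nvc0n1 (producing nvc0n1p0), and bdev_ftl_create assembles ftl0 from the base lvol plus that slice, with --l2p_dram_limit 10 capping the resident L2P table at 10 MiB of DRAM. The '[: : integer expression expected' complaint from restore.sh line 54 below comes from the guard '[' '' -eq 1 ']' being evaluated with a getopts-controlled variable that was never set on this run; the numeric test fails and the script simply takes the default branch. The construction, condensed (a sketch; $rpc as above):
  "$rpc" bdev_split_create nvc0n1 -s 5171 1            # -> nvc0n1p0
  "$rpc" -t 240 bdev_ftl_create -b ftl0 \
      -d 61cc76f7-59fc-4acd-94b4-23eb2adedcdc \
      --l2p_dram_limit 10 -c nvc0n1p0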
common/autotest_common.sh@1384 -- # nb=26476544 00:22:44.733 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:44.733 12:33:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 61cc76f7-59fc-4acd-94b4-23eb2adedcdc --l2p_dram_limit 10' 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:44.733 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:44.733 12:33:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 61cc76f7-59fc-4acd-94b4-23eb2adedcdc --l2p_dram_limit 10 -c nvc0n1p0 00:22:44.992 [2024-10-07 12:33:08.083289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.083363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:44.992 [2024-10-07 12:33:08.083387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:44.992 [2024-10-07 12:33:08.083400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.083471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.083485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.992 [2024-10-07 12:33:08.083501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:44.992 [2024-10-07 12:33:08.083513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.083553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:44.992 [2024-10-07 12:33:08.084564] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:44.992 [2024-10-07 12:33:08.084606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.084620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.992 [2024-10-07 12:33:08.084639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:22:44.992 [2024-10-07 12:33:08.084655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.084746] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 639eeb8e-3c5a-4629-a801-135403ce4b53 00:22:44.992 [2024-10-07 12:33:08.086286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.086332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:44.992 [2024-10-07 12:33:08.086348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:44.992 [2024-10-07 12:33:08.086364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.094068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 
12:33:08.094116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.992 [2024-10-07 12:33:08.094132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.641 ms 00:22:44.992 [2024-10-07 12:33:08.094147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.094262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.094282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.992 [2024-10-07 12:33:08.094296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:44.992 [2024-10-07 12:33:08.094321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.094383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.094400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:44.992 [2024-10-07 12:33:08.094413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:44.992 [2024-10-07 12:33:08.094428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.094458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:44.992 [2024-10-07 12:33:08.099633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.099672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.992 [2024-10-07 12:33:08.099692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:22:44.992 [2024-10-07 12:33:08.099705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.099749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.099762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:44.992 [2024-10-07 12:33:08.099778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:44.992 [2024-10-07 12:33:08.099793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.099857] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:44.992 [2024-10-07 12:33:08.100023] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:44.992 [2024-10-07 12:33:08.100048] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:44.992 [2024-10-07 12:33:08.100064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:44.992 [2024-10-07 12:33:08.100087] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:44.992 [2024-10-07 12:33:08.100101] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:44.992 [2024-10-07 12:33:08.100117] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:44.992 [2024-10-07 12:33:08.100130] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:44.992 [2024-10-07 12:33:08.100145] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:44.992 [2024-10-07 12:33:08.100157] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:44.992 [2024-10-07 12:33:08.100173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.100198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:44.992 [2024-10-07 12:33:08.100214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:44.992 [2024-10-07 12:33:08.100227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.100310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.992 [2024-10-07 12:33:08.100327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:44.992 [2024-10-07 12:33:08.100342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:44.992 [2024-10-07 12:33:08.100354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.992 [2024-10-07 12:33:08.100467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:44.992 [2024-10-07 12:33:08.100481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:44.992 [2024-10-07 12:33:08.100497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.992 [2024-10-07 12:33:08.100510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.992 [2024-10-07 12:33:08.100525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:44.992 [2024-10-07 12:33:08.100536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:44.992 [2024-10-07 12:33:08.100551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:44.992 [2024-10-07 12:33:08.100562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:44.993 [2024-10-07 12:33:08.100576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.993 [2024-10-07 12:33:08.100603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:44.993 [2024-10-07 12:33:08.100615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:44.993 [2024-10-07 12:33:08.100629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.993 [2024-10-07 12:33:08.100640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:44.993 [2024-10-07 12:33:08.100654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:44.993 [2024-10-07 12:33:08.100667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:44.993 [2024-10-07 12:33:08.100696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:44.993 [2024-10-07 12:33:08.100710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:44.993 [2024-10-07 12:33:08.100737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.993 [2024-10-07 12:33:08.100762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:44.993 
[2024-10-07 12:33:08.100774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.993 [2024-10-07 12:33:08.100799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:44.993 [2024-10-07 12:33:08.100813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.993 [2024-10-07 12:33:08.100838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:44.993 [2024-10-07 12:33:08.100850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.993 [2024-10-07 12:33:08.100875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:44.993 [2024-10-07 12:33:08.100891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:44.993 [2024-10-07 12:33:08.100915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.993 [2024-10-07 12:33:08.100930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:44.993 [2024-10-07 12:33:08.100941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:44.993 [2024-10-07 12:33:08.100955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.993 [2024-10-07 12:33:08.100967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:44.993 [2024-10-07 12:33:08.100981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:44.993 [2024-10-07 12:33:08.100992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.993 [2024-10-07 12:33:08.101006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:44.993 [2024-10-07 12:33:08.101017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:44.993 [2024-10-07 12:33:08.101032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.993 [2024-10-07 12:33:08.101043] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:44.993 [2024-10-07 12:33:08.101058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:44.993 [2024-10-07 12:33:08.101073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.993 [2024-10-07 12:33:08.101090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.993 [2024-10-07 12:33:08.101104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:44.993 [2024-10-07 12:33:08.101121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:44.993 [2024-10-07 12:33:08.101133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:44.993 [2024-10-07 12:33:08.101147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:44.993 [2024-10-07 12:33:08.101157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:44.993 [2024-10-07 12:33:08.101172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:44.993 [2024-10-07 12:33:08.101189] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:44.993 [2024-10-07 
12:33:08.101206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:44.993 [2024-10-07 12:33:08.101235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:44.993 [2024-10-07 12:33:08.101248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:44.993 [2024-10-07 12:33:08.101263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:44.993 [2024-10-07 12:33:08.101275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:44.993 [2024-10-07 12:33:08.101290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:44.993 [2024-10-07 12:33:08.101303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:44.993 [2024-10-07 12:33:08.101318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:44.993 [2024-10-07 12:33:08.101330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:44.993 [2024-10-07 12:33:08.101349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:44.993 [2024-10-07 12:33:08.101414] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:44.993 [2024-10-07 12:33:08.101432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:44.993 [2024-10-07 12:33:08.101461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:44.993 [2024-10-07 12:33:08.101474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:44.993 [2024-10-07 12:33:08.101488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:44.993 [2024-10-07 12:33:08.101501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.993 [2024-10-07 12:33:08.101516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:44.993 [2024-10-07 12:33:08.101529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:22:44.993 [2024-10-07 12:33:08.101544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.993 [2024-10-07 12:33:08.101595] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:44.993 [2024-10-07 12:33:08.101616] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:49.179 [2024-10-07 12:33:12.139468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.139553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:49.179 [2024-10-07 12:33:12.139573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4044.427 ms 00:22:49.179 [2024-10-07 12:33:12.139589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.177352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.177675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.179 [2024-10-07 12:33:12.177706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.472 ms 00:22:49.179 [2024-10-07 12:33:12.177723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.177919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.178042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:49.179 [2024-10-07 12:33:12.178056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:49.179 [2024-10-07 12:33:12.178079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.235439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.235514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.179 [2024-10-07 12:33:12.235540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.393 ms 00:22:49.179 [2024-10-07 12:33:12.235561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.235623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.235643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.179 [2024-10-07 12:33:12.235660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.179 [2024-10-07 12:33:12.235696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.236294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.236321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.179 [2024-10-07 12:33:12.236338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:49.179 [2024-10-07 12:33:12.236361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 
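[editor's note] The layout dump above is internally consistent and easy to cross-check: the 80.00 MiB l2p region is exactly the 20971520 L2P entries at the reported 4-byte address size, and the 103424 MiB base capacity is the 26476544 blocks of 4096 bytes seen earlier. A shell-arithmetic check (no SPDK calls involved):

  # Arithmetic check against the layout dump above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80     (MiB, l2p region)
  echo $(( 26476544 * 4096 / 1024 / 1024 ))  # 103424 (MiB, base bdev)
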
[2024-10-07 12:33:12.236495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.236522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.179 [2024-10-07 12:33:12.236538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:49.179 [2024-10-07 12:33:12.236560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.256737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.256820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.179 [2024-10-07 12:33:12.256838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.178 ms 00:22:49.179 [2024-10-07 12:33:12.256867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.270149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:49.179 [2024-10-07 12:33:12.273690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.273733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:49.179 [2024-10-07 12:33:12.273760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.687 ms 00:22:49.179 [2024-10-07 12:33:12.273774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.377201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.377275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:49.179 [2024-10-07 12:33:12.377304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.531 ms 00:22:49.179 [2024-10-07 12:33:12.377317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.377557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.377576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:49.179 [2024-10-07 12:33:12.377597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:22:49.179 [2024-10-07 12:33:12.377609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.416677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.416914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:49.179 [2024-10-07 12:33:12.416951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.044 ms 00:22:49.179 [2024-10-07 12:33:12.416965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.454932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.455189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:49.179 [2024-10-07 12:33:12.455227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.924 ms 00:22:49.179 [2024-10-07 12:33:12.455241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.179 [2024-10-07 12:33:12.456056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.179 [2024-10-07 12:33:12.456082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:49.179 
[2024-10-07 12:33:12.456100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:22:49.179 [2024-10-07 12:33:12.456112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.437 [2024-10-07 12:33:12.567920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.437 [2024-10-07 12:33:12.568017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:49.437 [2024-10-07 12:33:12.568047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.885 ms 00:22:49.437 [2024-10-07 12:33:12.568065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.437 [2024-10-07 12:33:12.608373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.437 [2024-10-07 12:33:12.608454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:49.437 [2024-10-07 12:33:12.608494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.225 ms 00:22:49.437 [2024-10-07 12:33:12.608508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.437 [2024-10-07 12:33:12.647231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.437 [2024-10-07 12:33:12.647297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:49.437 [2024-10-07 12:33:12.647336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.698 ms 00:22:49.437 [2024-10-07 12:33:12.647349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.437 [2024-10-07 12:33:12.685712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.437 [2024-10-07 12:33:12.685772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:49.437 [2024-10-07 12:33:12.685810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.353 ms 00:22:49.438 [2024-10-07 12:33:12.685823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.438 [2024-10-07 12:33:12.685889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.438 [2024-10-07 12:33:12.685921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:49.438 [2024-10-07 12:33:12.685947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:49.438 [2024-10-07 12:33:12.685959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.438 [2024-10-07 12:33:12.686098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.438 [2024-10-07 12:33:12.686113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:49.438 [2024-10-07 12:33:12.686129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:49.438 [2024-10-07 12:33:12.686141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.438 [2024-10-07 12:33:12.687406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4611.045 ms, result 0 00:22:49.438 { 00:22:49.438 "name": "ftl0", 00:22:49.438 "uuid": "639eeb8e-3c5a-4629-a801-135403ce4b53" 00:22:49.438 } 00:22:49.438 12:33:12 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:49.438 12:33:12 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:49.695 12:33:12 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:49.695 12:33:12 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:49.954 [2024-10-07 12:33:13.089847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.089951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:49.954 [2024-10-07 12:33:13.089972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.954 [2024-10-07 12:33:13.089988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.090021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:49.954 [2024-10-07 12:33:13.094392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.094434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:49.954 [2024-10-07 12:33:13.094488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:22:49.954 [2024-10-07 12:33:13.094500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.094775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.094791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:49.954 [2024-10-07 12:33:13.094808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:22:49.954 [2024-10-07 12:33:13.094821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.097357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.097505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:49.954 [2024-10-07 12:33:13.097535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:22:49.954 [2024-10-07 12:33:13.097551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.102616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.102659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:49.954 [2024-10-07 12:33:13.102677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.032 ms 00:22:49.954 [2024-10-07 12:33:13.102706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.140789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.140843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:49.954 [2024-10-07 12:33:13.140882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.052 ms 00:22:49.954 [2024-10-07 12:33:13.140895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.163548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.163824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:49.954 [2024-10-07 12:33:13.163858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.613 ms 00:22:49.954 [2024-10-07 12:33:13.163872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.164116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.164137] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:49.954 [2024-10-07 12:33:13.164154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:22:49.954 [2024-10-07 12:33:13.164167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.201470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.201520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:49.954 [2024-10-07 12:33:13.201558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.334 ms 00:22:49.954 [2024-10-07 12:33:13.201571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.954 [2024-10-07 12:33:13.238161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.954 [2024-10-07 12:33:13.238211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:49.954 [2024-10-07 12:33:13.238250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.589 ms 00:22:49.954 [2024-10-07 12:33:13.238262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.213 [2024-10-07 12:33:13.274063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.213 [2024-10-07 12:33:13.274111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:50.213 [2024-10-07 12:33:13.274147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.795 ms 00:22:50.213 [2024-10-07 12:33:13.274159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.213 [2024-10-07 12:33:13.310479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.213 [2024-10-07 12:33:13.310527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:50.213 [2024-10-07 12:33:13.310564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.256 ms 00:22:50.213 [2024-10-07 12:33:13.310577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.213 [2024-10-07 12:33:13.310631] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:50.213 [2024-10-07 12:33:13.310650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310792] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.310992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 
[2024-10-07 12:33:13.311189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:50.213 [2024-10-07 12:33:13.311547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:50.213 [2024-10-07 12:33:13.311563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.311987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:50.214 [2024-10-07 12:33:13.312204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:50.214 [2024-10-07 12:33:13.312224] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 639eeb8e-3c5a-4629-a801-135403ce4b53 00:22:50.214 [2024-10-07 12:33:13.312238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:50.214 [2024-10-07 12:33:13.312256] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:50.214 [2024-10-07 12:33:13.312268] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:50.214 [2024-10-07 12:33:13.312284] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:50.214 [2024-10-07 12:33:13.312295] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:50.214 [2024-10-07 12:33:13.312311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:50.214 [2024-10-07 12:33:13.312327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:50.214 [2024-10-07 12:33:13.312341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:50.214 [2024-10-07 12:33:13.312352] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:50.214 [2024-10-07 12:33:13.312368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.214 [2024-10-07 12:33:13.312380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:50.214 [2024-10-07 12:33:13.312396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:22:50.214 [2024-10-07 12:33:13.312409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.214 [2024-10-07 12:33:13.333124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.214 [2024-10-07 12:33:13.333169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:50.214 [2024-10-07 12:33:13.333204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.678 ms 00:22:50.214 [2024-10-07 12:33:13.333217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.214 [2024-10-07 12:33:13.333770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.214 [2024-10-07 12:33:13.333791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:50.214 [2024-10-07 12:33:13.333808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:22:50.214 [2024-10-07 12:33:13.333820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.214 [2024-10-07 12:33:13.392692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.214 [2024-10-07 12:33:13.392746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.214 [2024-10-07 12:33:13.392767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.214 [2024-10-07 12:33:13.392783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.214 [2024-10-07 12:33:13.392862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.214 [2024-10-07 12:33:13.392876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.214 [2024-10-07 12:33:13.392892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.214 [2024-10-07 12:33:13.392925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.214 [2024-10-07 12:33:13.393079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.214 [2024-10-07 12:33:13.393096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.214 [2024-10-07 12:33:13.393112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.214 [2024-10-07 12:33:13.393124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.214 [2024-10-07 12:33:13.393160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.214 [2024-10-07 12:33:13.393172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.214 [2024-10-07 12:33:13.393188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.214 [2024-10-07 12:33:13.393200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.472 [2024-10-07 12:33:13.517886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.472 [2024-10-07 12:33:13.517956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.472 [2024-10-07 12:33:13.517978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:50.472 [2024-10-07 12:33:13.517991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.472 [2024-10-07 12:33:13.621449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.472 [2024-10-07 12:33:13.621515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.472 [2024-10-07 12:33:13.621554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.472 [2024-10-07 12:33:13.621566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.472 [2024-10-07 12:33:13.621705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.472 [2024-10-07 12:33:13.621719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.472 [2024-10-07 12:33:13.621735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.472 [2024-10-07 12:33:13.621747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.472 [2024-10-07 12:33:13.621824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.473 [2024-10-07 12:33:13.621841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.473 [2024-10-07 12:33:13.621857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.473 [2024-10-07 12:33:13.621869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.473 [2024-10-07 12:33:13.622038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.473 [2024-10-07 12:33:13.622055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.473 [2024-10-07 12:33:13.622071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.473 [2024-10-07 12:33:13.622083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.473 [2024-10-07 12:33:13.622140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.473 [2024-10-07 12:33:13.622155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:50.473 [2024-10-07 12:33:13.622174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.473 [2024-10-07 12:33:13.622186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.473 [2024-10-07 12:33:13.622233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.473 [2024-10-07 12:33:13.622246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.473 [2024-10-07 12:33:13.622261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.473 [2024-10-07 12:33:13.622273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.473 [2024-10-07 12:33:13.622328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:50.473 [2024-10-07 12:33:13.622345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.473 [2024-10-07 12:33:13.622360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:50.473 [2024-10-07 12:33:13.622371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.473 [2024-10-07 12:33:13.622515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.503 ms, result 0 00:22:50.473 true 00:22:50.473 12:33:13 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76760 
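[editor's note] The shutdown summary above ('FTL shutdown', 533.503 ms) mirrors the earlier startup summary ('FTL startup', 4611.045 ms, dominated by the 4044.427 ms "Scrub NV cache" step). A throwaway triage sketch for finding the slow steps — assuming this console output was saved as ftl.log with one record per line; the trace_step quartets always emit a "name:" record immediately before the matching "duration:" record, so pairing consecutive matches works:

  # Pair each trace_step "name:" record with its "duration:" record
  # and list the slowest FTL management steps first.
  grep -hoE 'name: [^[]+|duration: [0-9.]+ ms' ftl.log \
    | paste - - \
    | sort -t: -k3 -rn \
    | head
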
00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76760 ']' 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76760 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76760 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:50.473 killing process with pid 76760 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76760' 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76760 00:22:50.473 12:33:13 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76760 00:22:53.754 12:33:16 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:57.942 262144+0 records in 00:22:57.942 262144+0 records out 00:22:57.942 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.15879 s, 258 MB/s 00:22:57.942 12:33:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:59.318 12:33:22 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:59.576 [2024-10-07 12:33:22.634506] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:22:59.576 [2024-10-07 12:33:22.634654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77007 ] 00:22:59.576 [2024-10-07 12:33:22.814583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.834 [2024-10-07 12:33:23.036821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.401 [2024-10-07 12:33:23.393807] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.401 [2024-10-07 12:33:23.393893] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.401 [2024-10-07 12:33:23.563790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.563850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:00.401 [2024-10-07 12:33:23.563866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:00.401 [2024-10-07 12:33:23.563883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.563954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.563967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.401 [2024-10-07 12:33:23.563978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:00.401 [2024-10-07 12:33:23.563989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.564012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:23:00.401 [2024-10-07 12:33:23.565027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:00.401 [2024-10-07 12:33:23.565057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.565068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.401 [2024-10-07 12:33:23.565079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:23:00.401 [2024-10-07 12:33:23.565089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.566561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:00.401 [2024-10-07 12:33:23.586549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.586606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:00.401 [2024-10-07 12:33:23.586639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.020 ms 00:23:00.401 [2024-10-07 12:33:23.586649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.586747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.586760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:00.401 [2024-10-07 12:33:23.586771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:00.401 [2024-10-07 12:33:23.586781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.593943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.593987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.401 [2024-10-07 12:33:23.594000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.086 ms 00:23:00.401 [2024-10-07 12:33:23.594010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.594110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.594123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.401 [2024-10-07 12:33:23.594134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:00.401 [2024-10-07 12:33:23.594143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.594194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.594206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:00.401 [2024-10-07 12:33:23.594217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:00.401 [2024-10-07 12:33:23.594227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.594253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.401 [2024-10-07 12:33:23.599158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.599194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.401 [2024-10-07 12:33:23.599206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.919 ms 00:23:00.401 [2024-10-07 12:33:23.599216] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.599249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.599260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:00.401 [2024-10-07 12:33:23.599270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:00.401 [2024-10-07 12:33:23.599280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.599340] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:00.401 [2024-10-07 12:33:23.599363] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:00.401 [2024-10-07 12:33:23.599398] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:00.401 [2024-10-07 12:33:23.599415] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:00.401 [2024-10-07 12:33:23.599504] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:00.401 [2024-10-07 12:33:23.599517] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:00.401 [2024-10-07 12:33:23.599530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:00.401 [2024-10-07 12:33:23.599546] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:00.401 [2024-10-07 12:33:23.599559] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:00.401 [2024-10-07 12:33:23.599569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:00.401 [2024-10-07 12:33:23.599580] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:00.401 [2024-10-07 12:33:23.599589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:00.401 [2024-10-07 12:33:23.599599] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:00.401 [2024-10-07 12:33:23.599609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.599619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:00.401 [2024-10-07 12:33:23.599629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:23:00.401 [2024-10-07 12:33:23.599640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.599714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.401 [2024-10-07 12:33:23.599728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:00.401 [2024-10-07 12:33:23.599739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:00.401 [2024-10-07 12:33:23.599748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.401 [2024-10-07 12:33:23.599843] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:00.401 [2024-10-07 12:33:23.599877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:00.401 [2024-10-07 12:33:23.599888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:23:00.401 [2024-10-07 12:33:23.599919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.401 [2024-10-07 12:33:23.599931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:00.401 [2024-10-07 12:33:23.599941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:00.401 [2024-10-07 12:33:23.599950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:00.402 [2024-10-07 12:33:23.599960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:00.402 [2024-10-07 12:33:23.599970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:00.402 [2024-10-07 12:33:23.599979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.402 [2024-10-07 12:33:23.599989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:00.402 [2024-10-07 12:33:23.599998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:00.402 [2024-10-07 12:33:23.600007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.402 [2024-10-07 12:33:23.600026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:00.402 [2024-10-07 12:33:23.600036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:00.402 [2024-10-07 12:33:23.600045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:00.402 [2024-10-07 12:33:23.600064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:00.402 [2024-10-07 12:33:23.600091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:00.402 [2024-10-07 12:33:23.600120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:00.402 [2024-10-07 12:33:23.600147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:00.402 [2024-10-07 12:33:23.600174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:00.402 [2024-10-07 12:33:23.600201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.402 [2024-10-07 12:33:23.600218] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:23:00.402 [2024-10-07 12:33:23.600227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:00.402 [2024-10-07 12:33:23.600236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.402 [2024-10-07 12:33:23.600245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:00.402 [2024-10-07 12:33:23.600254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:00.402 [2024-10-07 12:33:23.600263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:00.402 [2024-10-07 12:33:23.600282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:00.402 [2024-10-07 12:33:23.600291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:00.402 [2024-10-07 12:33:23.600310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:00.402 [2024-10-07 12:33:23.600323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.402 [2024-10-07 12:33:23.600343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:00.402 [2024-10-07 12:33:23.600352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:00.402 [2024-10-07 12:33:23.600362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:00.402 [2024-10-07 12:33:23.600371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:00.402 [2024-10-07 12:33:23.600379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:00.402 [2024-10-07 12:33:23.600389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:00.402 [2024-10-07 12:33:23.600399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:00.402 [2024-10-07 12:33:23.600411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:00.402 [2024-10-07 12:33:23.600432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:00.402 [2024-10-07 12:33:23.600443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:00.402 [2024-10-07 12:33:23.600453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:00.402 [2024-10-07 12:33:23.600463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:00.402 [2024-10-07 12:33:23.600474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:00.402 [2024-10-07 12:33:23.600483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:00.402 [2024-10-07 12:33:23.600494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:00.402 [2024-10-07 12:33:23.600504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:00.402 [2024-10-07 12:33:23.600515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:00.402 [2024-10-07 12:33:23.600565] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:00.402 [2024-10-07 12:33:23.600576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:00.402 [2024-10-07 12:33:23.600597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:00.402 [2024-10-07 12:33:23.600607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:00.402 [2024-10-07 12:33:23.600618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:00.402 [2024-10-07 12:33:23.600629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.402 [2024-10-07 12:33:23.600639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:00.402 [2024-10-07 12:33:23.600649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:23:00.402 [2024-10-07 12:33:23.600659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.402 [2024-10-07 12:33:23.645552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.402 [2024-10-07 12:33:23.645610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.402 [2024-10-07 12:33:23.645643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.911 ms 00:23:00.402 [2024-10-07 12:33:23.645654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.402 [2024-10-07 12:33:23.645758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.402 [2024-10-07 12:33:23.645770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:00.402 [2024-10-07 12:33:23.645780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.056 ms 00:23:00.402 [2024-10-07 12:33:23.645791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.693644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.693719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.661 [2024-10-07 12:33:23.693739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.856 ms 00:23:00.661 [2024-10-07 12:33:23.693750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.693805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.693816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.661 [2024-10-07 12:33:23.693827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:00.661 [2024-10-07 12:33:23.693839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.694361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.694384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.661 [2024-10-07 12:33:23.694396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:23:00.661 [2024-10-07 12:33:23.694416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.694542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.694556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.661 [2024-10-07 12:33:23.694568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:00.661 [2024-10-07 12:33:23.694578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.715146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.715201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.661 [2024-10-07 12:33:23.715216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.579 ms 00:23:00.661 [2024-10-07 12:33:23.715227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.735009] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:00.661 [2024-10-07 12:33:23.735060] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:00.661 [2024-10-07 12:33:23.735077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.735088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:00.661 [2024-10-07 12:33:23.735100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.740 ms 00:23:00.661 [2024-10-07 12:33:23.735110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.765082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.765143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:00.661 [2024-10-07 12:33:23.765175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.972 ms 00:23:00.661 [2024-10-07 12:33:23.765186] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.783746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.783798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:00.661 [2024-10-07 12:33:23.783829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.518 ms 00:23:00.661 [2024-10-07 12:33:23.783839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.802617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.802681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:00.661 [2024-10-07 12:33:23.802712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 00:23:00.661 [2024-10-07 12:33:23.802723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.803610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.803644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:00.661 [2024-10-07 12:33:23.803657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:23:00.661 [2024-10-07 12:33:23.803668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.891663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.661 [2024-10-07 12:33:23.891740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:00.661 [2024-10-07 12:33:23.891757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.109 ms 00:23:00.661 [2024-10-07 12:33:23.891768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.661 [2024-10-07 12:33:23.903378] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:00.661 [2024-10-07 12:33:23.906587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.906624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.662 [2024-10-07 12:33:23.906656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.757 ms 00:23:00.662 [2024-10-07 12:33:23.906667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.906777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.906790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:00.662 [2024-10-07 12:33:23.906801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:00.662 [2024-10-07 12:33:23.906811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.906902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.906925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.662 [2024-10-07 12:33:23.906936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:00.662 [2024-10-07 12:33:23.906946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.906972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.906995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:23:00.662 [2024-10-07 12:33:23.907005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:00.662 [2024-10-07 12:33:23.907015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.907046] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:00.662 [2024-10-07 12:33:23.907059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.907069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:00.662 [2024-10-07 12:33:23.907078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:00.662 [2024-10-07 12:33:23.907092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.944550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.944622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.662 [2024-10-07 12:33:23.944641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.493 ms 00:23:00.662 [2024-10-07 12:33:23.944652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.944764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.662 [2024-10-07 12:33:23.944777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:00.662 [2024-10-07 12:33:23.944789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:00.662 [2024-10-07 12:33:23.944799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.662 [2024-10-07 12:33:23.946030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.382 ms, result 0 00:23:02.035  [2024-10-07T12:33:26.260Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-07T12:33:27.193Z] Copying: 45/1024 [MB] (22 MBps) [2024-10-07T12:33:28.126Z] Copying: 68/1024 [MB] (23 MBps) [2024-10-07T12:33:29.059Z] Copying: 92/1024 [MB] (24 MBps) [2024-10-07T12:33:29.993Z] Copying: 115/1024 [MB] (23 MBps) [2024-10-07T12:33:31.367Z] Copying: 139/1024 [MB] (23 MBps) [2024-10-07T12:33:32.300Z] Copying: 163/1024 [MB] (23 MBps) [2024-10-07T12:33:33.233Z] Copying: 186/1024 [MB] (23 MBps) [2024-10-07T12:33:34.167Z] Copying: 210/1024 [MB] (23 MBps) [2024-10-07T12:33:35.100Z] Copying: 233/1024 [MB] (23 MBps) [2024-10-07T12:33:36.034Z] Copying: 258/1024 [MB] (24 MBps) [2024-10-07T12:33:36.967Z] Copying: 281/1024 [MB] (23 MBps) [2024-10-07T12:33:38.341Z] Copying: 305/1024 [MB] (23 MBps) [2024-10-07T12:33:39.282Z] Copying: 328/1024 [MB] (23 MBps) [2024-10-07T12:33:40.226Z] Copying: 352/1024 [MB] (23 MBps) [2024-10-07T12:33:41.159Z] Copying: 375/1024 [MB] (23 MBps) [2024-10-07T12:33:42.091Z] Copying: 400/1024 [MB] (24 MBps) [2024-10-07T12:33:43.022Z] Copying: 424/1024 [MB] (24 MBps) [2024-10-07T12:33:43.954Z] Copying: 449/1024 [MB] (24 MBps) [2024-10-07T12:33:45.326Z] Copying: 473/1024 [MB] (24 MBps) [2024-10-07T12:33:46.260Z] Copying: 497/1024 [MB] (24 MBps) [2024-10-07T12:33:47.193Z] Copying: 522/1024 [MB] (24 MBps) [2024-10-07T12:33:48.135Z] Copying: 547/1024 [MB] (24 MBps) [2024-10-07T12:33:49.070Z] Copying: 572/1024 [MB] (25 MBps) [2024-10-07T12:33:50.006Z] Copying: 598/1024 [MB] (25 MBps) [2024-10-07T12:33:50.939Z] Copying: 623/1024 [MB] (25 MBps) [2024-10-07T12:33:52.313Z] Copying: 648/1024 [MB] (25 
MBps) [2024-10-07T12:33:53.286Z] Copying: 674/1024 [MB] (25 MBps) [2024-10-07T12:33:54.223Z] Copying: 698/1024 [MB] (23 MBps) [2024-10-07T12:33:55.158Z] Copying: 724/1024 [MB] (25 MBps) [2024-10-07T12:33:56.094Z] Copying: 749/1024 [MB] (25 MBps) [2024-10-07T12:33:57.031Z] Copying: 774/1024 [MB] (25 MBps) [2024-10-07T12:33:57.966Z] Copying: 799/1024 [MB] (24 MBps) [2024-10-07T12:33:58.904Z] Copying: 824/1024 [MB] (25 MBps) [2024-10-07T12:34:00.280Z] Copying: 850/1024 [MB] (25 MBps) [2024-10-07T12:34:01.216Z] Copying: 875/1024 [MB] (25 MBps) [2024-10-07T12:34:02.152Z] Copying: 901/1024 [MB] (25 MBps) [2024-10-07T12:34:03.087Z] Copying: 926/1024 [MB] (25 MBps) [2024-10-07T12:34:04.024Z] Copying: 951/1024 [MB] (24 MBps) [2024-10-07T12:34:04.959Z] Copying: 976/1024 [MB] (25 MBps) [2024-10-07T12:34:05.894Z] Copying: 1002/1024 [MB] (25 MBps) [2024-10-07T12:34:05.894Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-10-07 12:34:05.757869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.757943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:42.603 [2024-10-07 12:34:05.757960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:42.603 [2024-10-07 12:34:05.757971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.758002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:42.603 [2024-10-07 12:34:05.762287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.762321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:42.603 [2024-10-07 12:34:05.762334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:23:42.603 [2024-10-07 12:34:05.762344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.764271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.764309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:42.603 [2024-10-07 12:34:05.764322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:23:42.603 [2024-10-07 12:34:05.764332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.781607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.781666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:42.603 [2024-10-07 12:34:05.781680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.284 ms 00:23:42.603 [2024-10-07 12:34:05.781691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.786672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.786704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:42.603 [2024-10-07 12:34:05.786716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.954 ms 00:23:42.603 [2024-10-07 12:34:05.786726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.823854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.823920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:42.603 [2024-10-07 
12:34:05.823935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.113 ms 00:23:42.603 [2024-10-07 12:34:05.823945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.844745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.844786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:42.603 [2024-10-07 12:34:05.844811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.788 ms 00:23:42.603 [2024-10-07 12:34:05.844821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.603 [2024-10-07 12:34:05.844995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.603 [2024-10-07 12:34:05.845009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:42.603 [2024-10-07 12:34:05.845020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:42.604 [2024-10-07 12:34:05.845030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.604 [2024-10-07 12:34:05.882834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.604 [2024-10-07 12:34:05.882872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:42.604 [2024-10-07 12:34:05.882906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.848 ms 00:23:42.604 [2024-10-07 12:34:05.882917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.863 [2024-10-07 12:34:05.919457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.863 [2024-10-07 12:34:05.919499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:42.863 [2024-10-07 12:34:05.919514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.556 ms 00:23:42.863 [2024-10-07 12:34:05.919524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.863 [2024-10-07 12:34:05.955424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.863 [2024-10-07 12:34:05.955468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:42.863 [2024-10-07 12:34:05.955483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.917 ms 00:23:42.863 [2024-10-07 12:34:05.955493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.863 [2024-10-07 12:34:05.991494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.863 [2024-10-07 12:34:05.991537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:42.864 [2024-10-07 12:34:05.991552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.950 ms 00:23:42.864 [2024-10-07 12:34:05.991562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.864 [2024-10-07 12:34:05.991607] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:42.864 [2024-10-07 12:34:05.991625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991932] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.991994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 
12:34:05.992190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:23:42.864 [2024-10-07 12:34:05.992450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:42.864 [2024-10-07 12:34:05.992523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:42.865 [2024-10-07 12:34:05.992685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:42.865 [2024-10-07 12:34:05.992695] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 639eeb8e-3c5a-4629-a801-135403ce4b53 00:23:42.865 [2024-10-07 12:34:05.992706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:42.865 [2024-10-07 
12:34:05.992715] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:42.865 [2024-10-07 12:34:05.992725] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:42.865 [2024-10-07 12:34:05.992735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:42.865 [2024-10-07 12:34:05.992745] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:42.865 [2024-10-07 12:34:05.992755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:42.865 [2024-10-07 12:34:05.992778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:42.865 [2024-10-07 12:34:05.992787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:42.865 [2024-10-07 12:34:05.992796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:42.865 [2024-10-07 12:34:05.992805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.865 [2024-10-07 12:34:05.992816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:42.865 [2024-10-07 12:34:05.992845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:23:42.865 [2024-10-07 12:34:05.992855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.865 [2024-10-07 12:34:06.012802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.865 [2024-10-07 12:34:06.012843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:42.865 [2024-10-07 12:34:06.012857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.938 ms 00:23:42.865 [2024-10-07 12:34:06.012868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.865 [2024-10-07 12:34:06.013406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.865 [2024-10-07 12:34:06.013424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:42.865 [2024-10-07 12:34:06.013435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:23:42.865 [2024-10-07 12:34:06.013445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.865 [2024-10-07 12:34:06.057883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.865 [2024-10-07 12:34:06.057937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.865 [2024-10-07 12:34:06.057952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.865 [2024-10-07 12:34:06.057974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.865 [2024-10-07 12:34:06.058044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.865 [2024-10-07 12:34:06.058055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.865 [2024-10-07 12:34:06.058065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.865 [2024-10-07 12:34:06.058074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.865 [2024-10-07 12:34:06.058176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.865 [2024-10-07 12:34:06.058194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.865 [2024-10-07 12:34:06.058205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.865 [2024-10-07 12:34:06.058216] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:42.865 [2024-10-07 12:34:06.058241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.865 [2024-10-07 12:34:06.058252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.865 [2024-10-07 12:34:06.058262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.865 [2024-10-07 12:34:06.058273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.181895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.181972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.124 [2024-10-07 12:34:06.181989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.182000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.283370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.283434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.124 [2024-10-07 12:34:06.283450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.283460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.283565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.283578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.124 [2024-10-07 12:34:06.283588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.283598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.283643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.283662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.124 [2024-10-07 12:34:06.283672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.283682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.283784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.283798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.124 [2024-10-07 12:34:06.283808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.283817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.283851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.283862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:43.124 [2024-10-07 12:34:06.283880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.283890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.283965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.283977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.124 [2024-10-07 12:34:06.283987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:43.124 [2024-10-07 12:34:06.283997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.284038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.124 [2024-10-07 12:34:06.284057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.124 [2024-10-07 12:34:06.284067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.124 [2024-10-07 12:34:06.284078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.124 [2024-10-07 12:34:06.284199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.149 ms, result 0 00:23:44.500 00:23:44.500 00:23:44.500 12:34:07 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:44.500 [2024-10-07 12:34:07.753026] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:23:44.500 [2024-10-07 12:34:07.753155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77465 ] 00:23:44.758 [2024-10-07 12:34:07.922154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.016 [2024-10-07 12:34:08.134065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.281 [2024-10-07 12:34:08.469581] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:45.281 [2024-10-07 12:34:08.469642] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:45.540 [2024-10-07 12:34:08.630319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.630375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:45.540 [2024-10-07 12:34:08.630390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:45.540 [2024-10-07 12:34:08.630405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.630453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.630466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.540 [2024-10-07 12:34:08.630476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:45.540 [2024-10-07 12:34:08.630486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.630507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:45.540 [2024-10-07 12:34:08.631542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:45.540 [2024-10-07 12:34:08.631575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.631586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.540 [2024-10-07 12:34:08.631597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:23:45.540 [2024-10-07 12:34:08.631607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 
[2024-10-07 12:34:08.633080] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:45.540 [2024-10-07 12:34:08.652188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.652226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:45.540 [2024-10-07 12:34:08.652241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.140 ms 00:23:45.540 [2024-10-07 12:34:08.652252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.652315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.652328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:45.540 [2024-10-07 12:34:08.652339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:45.540 [2024-10-07 12:34:08.652348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.659227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.659254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.540 [2024-10-07 12:34:08.659266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.801 ms 00:23:45.540 [2024-10-07 12:34:08.659276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.659355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.659369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.540 [2024-10-07 12:34:08.659380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:45.540 [2024-10-07 12:34:08.659389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.659435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.659447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:45.540 [2024-10-07 12:34:08.659457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:45.540 [2024-10-07 12:34:08.659467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.659491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:45.540 [2024-10-07 12:34:08.664329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.664358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.540 [2024-10-07 12:34:08.664370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:23:45.540 [2024-10-07 12:34:08.664380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.664411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.664421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:45.540 [2024-10-07 12:34:08.664432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:45.540 [2024-10-07 12:34:08.664442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.664502] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:45.540 
[2024-10-07 12:34:08.664524] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:45.540 [2024-10-07 12:34:08.664559] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:45.540 [2024-10-07 12:34:08.664576] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:45.540 [2024-10-07 12:34:08.664665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:45.540 [2024-10-07 12:34:08.664691] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:45.540 [2024-10-07 12:34:08.664705] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:45.540 [2024-10-07 12:34:08.664721] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:45.540 [2024-10-07 12:34:08.664733] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:45.540 [2024-10-07 12:34:08.664745] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:45.540 [2024-10-07 12:34:08.664755] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:45.540 [2024-10-07 12:34:08.664765] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:45.540 [2024-10-07 12:34:08.664774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:45.540 [2024-10-07 12:34:08.664785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.664795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:45.540 [2024-10-07 12:34:08.664805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:45.540 [2024-10-07 12:34:08.664815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.664886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.540 [2024-10-07 12:34:08.664913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:45.540 [2024-10-07 12:34:08.664924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:45.540 [2024-10-07 12:34:08.664934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.540 [2024-10-07 12:34:08.665029] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:45.540 [2024-10-07 12:34:08.665044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:45.540 [2024-10-07 12:34:08.665055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.540 [2024-10-07 12:34:08.665065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.540 [2024-10-07 12:34:08.665075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:45.540 [2024-10-07 12:34:08.665085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:45.540 [2024-10-07 12:34:08.665094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:45.540 [2024-10-07 12:34:08.665103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:45.540 [2024-10-07 12:34:08.665112] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:23:45.540 [2024-10-07 12:34:08.665121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.540 [2024-10-07 12:34:08.665131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:45.540 [2024-10-07 12:34:08.665140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:45.540 [2024-10-07 12:34:08.665151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.540 [2024-10-07 12:34:08.665169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:45.540 [2024-10-07 12:34:08.665179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:45.541 [2024-10-07 12:34:08.665188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:45.541 [2024-10-07 12:34:08.665207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:45.541 [2024-10-07 12:34:08.665236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:45.541 [2024-10-07 12:34:08.665263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:45.541 [2024-10-07 12:34:08.665289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:45.541 [2024-10-07 12:34:08.665316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:45.541 [2024-10-07 12:34:08.665342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.541 [2024-10-07 12:34:08.665359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:45.541 [2024-10-07 12:34:08.665368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:45.541 [2024-10-07 12:34:08.665377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.541 [2024-10-07 12:34:08.665386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:45.541 [2024-10-07 12:34:08.665395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:45.541 [2024-10-07 12:34:08.665404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665412] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:45.541 [2024-10-07 12:34:08.665421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:45.541 [2024-10-07 12:34:08.665430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665439] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:45.541 [2024-10-07 12:34:08.665450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:45.541 [2024-10-07 12:34:08.665463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.541 [2024-10-07 12:34:08.665482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:45.541 [2024-10-07 12:34:08.665491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:45.541 [2024-10-07 12:34:08.665500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:45.541 [2024-10-07 12:34:08.665510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:45.541 [2024-10-07 12:34:08.665519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:45.541 [2024-10-07 12:34:08.665528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:45.541 [2024-10-07 12:34:08.665539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:45.541 [2024-10-07 12:34:08.665552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:45.541 [2024-10-07 12:34:08.665573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:45.541 [2024-10-07 12:34:08.665583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:45.541 [2024-10-07 12:34:08.665594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:45.541 [2024-10-07 12:34:08.665604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:45.541 [2024-10-07 12:34:08.665614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:45.541 [2024-10-07 12:34:08.665624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:45.541 [2024-10-07 12:34:08.665634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:45.541 [2024-10-07 12:34:08.665644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:45.541 [2024-10-07 12:34:08.665654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665664] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:45.541 [2024-10-07 12:34:08.665704] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:45.541 [2024-10-07 12:34:08.665715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:45.541 [2024-10-07 12:34:08.665736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:45.541 [2024-10-07 12:34:08.665746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:45.541 [2024-10-07 12:34:08.665756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:45.541 [2024-10-07 12:34:08.665767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.665778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:45.541 [2024-10-07 12:34:08.665788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:23:45.541 [2024-10-07 12:34:08.665798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.713707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.713753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:45.541 [2024-10-07 12:34:08.713768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.936 ms 00:23:45.541 [2024-10-07 12:34:08.713780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.713877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.713888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:45.541 [2024-10-07 12:34:08.713907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:45.541 [2024-10-07 12:34:08.713918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.755867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.755920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:45.541 [2024-10-07 12:34:08.755939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.928 ms 00:23:45.541 [2024-10-07 12:34:08.755950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.756001] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.756012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:45.541 [2024-10-07 12:34:08.756025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:45.541 [2024-10-07 12:34:08.756034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.756531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.756551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:45.541 [2024-10-07 12:34:08.756562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:23:45.541 [2024-10-07 12:34:08.756578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.756699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.756712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:45.541 [2024-10-07 12:34:08.756723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:45.541 [2024-10-07 12:34:08.756733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.774879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.774938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:45.541 [2024-10-07 12:34:08.774954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.155 ms 00:23:45.541 [2024-10-07 12:34:08.774965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.794167] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:45.541 [2024-10-07 12:34:08.794213] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:45.541 [2024-10-07 12:34:08.794229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.794241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:45.541 [2024-10-07 12:34:08.794254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.146 ms 00:23:45.541 [2024-10-07 12:34:08.794264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.541 [2024-10-07 12:34:08.824043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.541 [2024-10-07 12:34:08.824098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:45.541 [2024-10-07 12:34:08.824113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.774 ms 00:23:45.541 [2024-10-07 12:34:08.824124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.800 [2024-10-07 12:34:08.842919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.800 [2024-10-07 12:34:08.842964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:45.800 [2024-10-07 12:34:08.842984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.761 ms 00:23:45.800 [2024-10-07 12:34:08.842995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.800 [2024-10-07 12:34:08.860847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.800 [2024-10-07 
12:34:08.860884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:45.800 [2024-10-07 12:34:08.860897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.805 ms 00:23:45.800 [2024-10-07 12:34:08.860914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.800 [2024-10-07 12:34:08.861679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.800 [2024-10-07 12:34:08.861707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:45.800 [2024-10-07 12:34:08.861719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:23:45.800 [2024-10-07 12:34:08.861729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.800 [2024-10-07 12:34:08.947612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.800 [2024-10-07 12:34:08.947675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:45.800 [2024-10-07 12:34:08.947692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.999 ms 00:23:45.800 [2024-10-07 12:34:08.947703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.800 [2024-10-07 12:34:08.959198] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:45.800 [2024-10-07 12:34:08.962349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.800 [2024-10-07 12:34:08.962379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:45.800 [2024-10-07 12:34:08.962394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.590 ms 00:23:45.800 [2024-10-07 12:34:08.962409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.800 [2024-10-07 12:34:08.962511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.800 [2024-10-07 12:34:08.962524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:45.800 [2024-10-07 12:34:08.962536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:45.801 [2024-10-07 12:34:08.962546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.801 [2024-10-07 12:34:08.962640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.801 [2024-10-07 12:34:08.962657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:45.801 [2024-10-07 12:34:08.962669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:45.801 [2024-10-07 12:34:08.962678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.801 [2024-10-07 12:34:08.962707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.801 [2024-10-07 12:34:08.962718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:45.801 [2024-10-07 12:34:08.962728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:45.801 [2024-10-07 12:34:08.962737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.801 [2024-10-07 12:34:08.962768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:45.801 [2024-10-07 12:34:08.962780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.801 [2024-10-07 12:34:08.962790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:45.801 
[2024-10-07 12:34:08.962804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:45.801 [2024-10-07 12:34:08.962813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.801 [2024-10-07 12:34:08.999770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.801 [2024-10-07 12:34:08.999816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:45.801 [2024-10-07 12:34:08.999831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.992 ms 00:23:45.801 [2024-10-07 12:34:08.999842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.801 [2024-10-07 12:34:08.999935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.801 [2024-10-07 12:34:08.999949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:45.801 [2024-10-07 12:34:08.999961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:45.801 [2024-10-07 12:34:08.999971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.801 [2024-10-07 12:34:09.001137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.963 ms, result 0 00:23:47.176  [2024-10-07T12:34:11.403Z] Copying: 26/1024 [MB] (26 MBps) [2024-10-07T12:34:12.338Z] Copying: 52/1024 [MB] (26 MBps) [2024-10-07T12:34:13.273Z] Copying: 78/1024 [MB] (26 MBps) [2024-10-07T12:34:14.648Z] Copying: 104/1024 [MB] (26 MBps) [2024-10-07T12:34:15.214Z] Copying: 130/1024 [MB] (26 MBps) [2024-10-07T12:34:16.589Z] Copying: 157/1024 [MB] (26 MBps) [2024-10-07T12:34:17.523Z] Copying: 183/1024 [MB] (26 MBps) [2024-10-07T12:34:18.456Z] Copying: 210/1024 [MB] (26 MBps) [2024-10-07T12:34:19.390Z] Copying: 236/1024 [MB] (26 MBps) [2024-10-07T12:34:20.327Z] Copying: 262/1024 [MB] (25 MBps) [2024-10-07T12:34:21.261Z] Copying: 288/1024 [MB] (26 MBps) [2024-10-07T12:34:22.635Z] Copying: 314/1024 [MB] (26 MBps) [2024-10-07T12:34:23.201Z] Copying: 340/1024 [MB] (25 MBps) [2024-10-07T12:34:24.575Z] Copying: 366/1024 [MB] (25 MBps) [2024-10-07T12:34:25.510Z] Copying: 392/1024 [MB] (26 MBps) [2024-10-07T12:34:26.444Z] Copying: 419/1024 [MB] (26 MBps) [2024-10-07T12:34:27.381Z] Copying: 445/1024 [MB] (26 MBps) [2024-10-07T12:34:28.316Z] Copying: 472/1024 [MB] (26 MBps) [2024-10-07T12:34:29.256Z] Copying: 499/1024 [MB] (26 MBps) [2024-10-07T12:34:30.189Z] Copying: 525/1024 [MB] (26 MBps) [2024-10-07T12:34:31.562Z] Copying: 552/1024 [MB] (26 MBps) [2024-10-07T12:34:32.495Z] Copying: 578/1024 [MB] (26 MBps) [2024-10-07T12:34:33.430Z] Copying: 604/1024 [MB] (25 MBps) [2024-10-07T12:34:34.366Z] Copying: 630/1024 [MB] (26 MBps) [2024-10-07T12:34:35.354Z] Copying: 656/1024 [MB] (26 MBps) [2024-10-07T12:34:36.289Z] Copying: 683/1024 [MB] (26 MBps) [2024-10-07T12:34:37.223Z] Copying: 709/1024 [MB] (26 MBps) [2024-10-07T12:34:38.597Z] Copying: 735/1024 [MB] (26 MBps) [2024-10-07T12:34:39.531Z] Copying: 761/1024 [MB] (26 MBps) [2024-10-07T12:34:40.466Z] Copying: 787/1024 [MB] (25 MBps) [2024-10-07T12:34:41.400Z] Copying: 812/1024 [MB] (25 MBps) [2024-10-07T12:34:42.334Z] Copying: 838/1024 [MB] (25 MBps) [2024-10-07T12:34:43.268Z] Copying: 863/1024 [MB] (25 MBps) [2024-10-07T12:34:44.202Z] Copying: 889/1024 [MB] (25 MBps) [2024-10-07T12:34:45.577Z] Copying: 915/1024 [MB] (26 MBps) [2024-10-07T12:34:46.512Z] Copying: 941/1024 [MB] (25 MBps) [2024-10-07T12:34:47.475Z] Copying: 966/1024 [MB] (25 MBps) [2024-10-07T12:34:48.416Z] 
Copying: 992/1024 [MB] (25 MBps) [2024-10-07T12:34:48.416Z] Copying: 1017/1024 [MB] (25 MBps) [2024-10-07T12:34:49.796Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-07 12:34:49.507952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.508029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.505 [2024-10-07 12:34:49.508047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:26.505 [2024-10-07 12:34:49.508065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.508102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.505 [2024-10-07 12:34:49.512719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.512759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.505 [2024-10-07 12:34:49.512772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.601 ms 00:24:26.505 [2024-10-07 12:34:49.512782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.513028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.513045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.505 [2024-10-07 12:34:49.513056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:24:26.505 [2024-10-07 12:34:49.513066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.515888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.515919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:26.505 [2024-10-07 12:34:49.515931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:24:26.505 [2024-10-07 12:34:49.515942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.521072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.521113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:26.505 [2024-10-07 12:34:49.521125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.118 ms 00:24:26.505 [2024-10-07 12:34:49.521136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.561542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.561608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:26.505 [2024-10-07 12:34:49.561624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.374 ms 00:24:26.505 [2024-10-07 12:34:49.561634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.585106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.585175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:26.505 [2024-10-07 12:34:49.585191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:24:26.505 [2024-10-07 12:34:49.585202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.585379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:26.505 [2024-10-07 12:34:49.585396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.505 [2024-10-07 12:34:49.585418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:24:26.505 [2024-10-07 12:34:49.585428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.621908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.621959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:26.505 [2024-10-07 12:34:49.621974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.523 ms 00:24:26.505 [2024-10-07 12:34:49.621984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.658192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.658241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:26.505 [2024-10-07 12:34:49.658255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.221 ms 00:24:26.505 [2024-10-07 12:34:49.658265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.693720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.693770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.505 [2024-10-07 12:34:49.693784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.467 ms 00:24:26.505 [2024-10-07 12:34:49.693794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.729726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.505 [2024-10-07 12:34:49.729777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.505 [2024-10-07 12:34:49.729791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.904 ms 00:24:26.505 [2024-10-07 12:34:49.729801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.505 [2024-10-07 12:34:49.729844] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.505 [2024-10-07 12:34:49.729861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:24:26.505 [2024-10-07 12:34:49.729967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.729997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.505 [2024-10-07 12:34:49.730211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730738] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.506 [2024-10-07 12:34:49.730927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.506 [2024-10-07 12:34:49.730937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 639eeb8e-3c5a-4629-a801-135403ce4b53 00:24:26.506 [2024-10-07 12:34:49.730949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:26.506 [2024-10-07 12:34:49.730958] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:26.506 [2024-10-07 12:34:49.730976] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:26.506 [2024-10-07 12:34:49.730986] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:26.506 [2024-10-07 12:34:49.730996] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.506 [2024-10-07 12:34:49.731012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.506 [2024-10-07 12:34:49.731022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.506 [2024-10-07 12:34:49.731031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.506 [2024-10-07 
12:34:49.731040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.506 [2024-10-07 12:34:49.731050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.506 [2024-10-07 12:34:49.731070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.506 [2024-10-07 12:34:49.731081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:24:26.506 [2024-10-07 12:34:49.731091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.506 [2024-10-07 12:34:49.751235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.506 [2024-10-07 12:34:49.751281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.506 [2024-10-07 12:34:49.751311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.137 ms 00:24:26.506 [2024-10-07 12:34:49.751327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.506 [2024-10-07 12:34:49.751856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.506 [2024-10-07 12:34:49.751876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.506 [2024-10-07 12:34:49.751888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:24:26.506 [2024-10-07 12:34:49.751898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:49.795352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:49.795415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.765 [2024-10-07 12:34:49.795431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:49.795447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:49.795515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:49.795526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.765 [2024-10-07 12:34:49.795535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:49.795545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:49.795623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:49.795636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.765 [2024-10-07 12:34:49.795646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:49.795656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:49.795677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:49.795687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.765 [2024-10-07 12:34:49.795697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:49.795707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:49.917129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:49.917211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.765 [2024-10-07 12:34:49.917227] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:49.917244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:50.016633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:50.016699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.765 [2024-10-07 12:34:50.016714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:50.016724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:50.016815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.765 [2024-10-07 12:34:50.016827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.765 [2024-10-07 12:34:50.016837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.765 [2024-10-07 12:34:50.016847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.765 [2024-10-07 12:34:50.016889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.766 [2024-10-07 12:34:50.016917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.766 [2024-10-07 12:34:50.016927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.766 [2024-10-07 12:34:50.016953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.766 [2024-10-07 12:34:50.017052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.766 [2024-10-07 12:34:50.017066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.766 [2024-10-07 12:34:50.017077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.766 [2024-10-07 12:34:50.017087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.766 [2024-10-07 12:34:50.017124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.766 [2024-10-07 12:34:50.017140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:26.766 [2024-10-07 12:34:50.017151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.766 [2024-10-07 12:34:50.017161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.766 [2024-10-07 12:34:50.017222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.766 [2024-10-07 12:34:50.017237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.766 [2024-10-07 12:34:50.017247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.766 [2024-10-07 12:34:50.017257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.766 [2024-10-07 12:34:50.017321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:26.766 [2024-10-07 12:34:50.017334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.766 [2024-10-07 12:34:50.017343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:26.766 [2024-10-07 12:34:50.017353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.766 [2024-10-07 12:34:50.017500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 510.336 ms, result 0 00:24:28.139 00:24:28.139 00:24:28.139 
12:34:51 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:30.038 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:30.038 12:34:52 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:30.038 [2024-10-07 12:34:52.948190] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:24:30.038 [2024-10-07 12:34:52.948327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77937 ] 00:24:30.038 [2024-10-07 12:34:53.118658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.038 [2024-10-07 12:34:53.319926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.604 [2024-10-07 12:34:53.652935] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.604 [2024-10-07 12:34:53.653018] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.604 [2024-10-07 12:34:53.812913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.812971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:30.604 [2024-10-07 12:34:53.812986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:30.604 [2024-10-07 12:34:53.813016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.813065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.813088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.604 [2024-10-07 12:34:53.813098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:30.604 [2024-10-07 12:34:53.813108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.813128] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:30.604 [2024-10-07 12:34:53.814111] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:30.604 [2024-10-07 12:34:53.814140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.814151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.604 [2024-10-07 12:34:53.814162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:24:30.604 [2024-10-07 12:34:53.814171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.815603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:30.604 [2024-10-07 12:34:53.833866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.833914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:30.604 [2024-10-07 12:34:53.833945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.293 ms 00:24:30.604 [2024-10-07 12:34:53.833955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 
[2024-10-07 12:34:53.834014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.834026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:30.604 [2024-10-07 12:34:53.834037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:30.604 [2024-10-07 12:34:53.834047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.840772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.840804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.604 [2024-10-07 12:34:53.840831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:24:30.604 [2024-10-07 12:34:53.840841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.840925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.840940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.604 [2024-10-07 12:34:53.840950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:30.604 [2024-10-07 12:34:53.840960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.841003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.841014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:30.604 [2024-10-07 12:34:53.841025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:30.604 [2024-10-07 12:34:53.841034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.841058] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:30.604 [2024-10-07 12:34:53.845813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.845846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.604 [2024-10-07 12:34:53.845873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:24:30.604 [2024-10-07 12:34:53.845883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.845912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.845946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:30.604 [2024-10-07 12:34:53.845957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:30.604 [2024-10-07 12:34:53.845967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.846025] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:30.604 [2024-10-07 12:34:53.846067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:30.604 [2024-10-07 12:34:53.846108] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:30.604 [2024-10-07 12:34:53.846125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:30.604 [2024-10-07 12:34:53.846215] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:24:30.604 [2024-10-07 12:34:53.846228] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:30.604 [2024-10-07 12:34:53.846241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:30.604 [2024-10-07 12:34:53.846259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846283] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:30.604 [2024-10-07 12:34:53.846293] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:30.604 [2024-10-07 12:34:53.846303] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:30.604 [2024-10-07 12:34:53.846312] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:30.604 [2024-10-07 12:34:53.846323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.846333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:30.604 [2024-10-07 12:34:53.846344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:24:30.604 [2024-10-07 12:34:53.846353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.846430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.604 [2024-10-07 12:34:53.846444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:30.604 [2024-10-07 12:34:53.846454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:30.604 [2024-10-07 12:34:53.846464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.604 [2024-10-07 12:34:53.846555] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:30.604 [2024-10-07 12:34:53.846579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:30.604 [2024-10-07 12:34:53.846590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:30.604 [2024-10-07 12:34:53.846620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:30.604 [2024-10-07 12:34:53.846649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.604 [2024-10-07 12:34:53.846668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:30.604 [2024-10-07 12:34:53.846678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:30.604 [2024-10-07 12:34:53.846687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.604 [2024-10-07 12:34:53.846705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:24:30.604 [2024-10-07 12:34:53.846715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:30.604 [2024-10-07 12:34:53.846724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:30.604 [2024-10-07 12:34:53.846743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:30.604 [2024-10-07 12:34:53.846770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:30.604 [2024-10-07 12:34:53.846797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:30.604 [2024-10-07 12:34:53.846824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:30.604 [2024-10-07 12:34:53.846851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.604 [2024-10-07 12:34:53.846870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:30.604 [2024-10-07 12:34:53.846879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.604 [2024-10-07 12:34:53.846896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:30.604 [2024-10-07 12:34:53.846917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:30.604 [2024-10-07 12:34:53.846926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.604 [2024-10-07 12:34:53.846935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:30.604 [2024-10-07 12:34:53.846944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:30.604 [2024-10-07 12:34:53.846953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:30.604 [2024-10-07 12:34:53.846978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:30.604 [2024-10-07 12:34:53.846988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.846998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:30.604 [2024-10-07 12:34:53.847008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:30.604 [2024-10-07 12:34:53.847021] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.604 [2024-10-07 12:34:53.847031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.604 [2024-10-07 12:34:53.847041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:30.604 [2024-10-07 12:34:53.847050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:30.604 [2024-10-07 12:34:53.847059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:30.604 [2024-10-07 12:34:53.847069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:30.604 [2024-10-07 12:34:53.847078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:30.604 [2024-10-07 12:34:53.847088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:30.604 [2024-10-07 12:34:53.847098] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:30.604 [2024-10-07 12:34:53.847111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.604 [2024-10-07 12:34:53.847122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:30.604 [2024-10-07 12:34:53.847132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:30.604 [2024-10-07 12:34:53.847142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:30.604 [2024-10-07 12:34:53.847153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:30.604 [2024-10-07 12:34:53.847164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:30.604 [2024-10-07 12:34:53.847174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:30.604 [2024-10-07 12:34:53.847184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:30.604 [2024-10-07 12:34:53.847194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:30.604 [2024-10-07 12:34:53.847204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:30.604 [2024-10-07 12:34:53.847214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:30.604 [2024-10-07 12:34:53.847225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:30.604 [2024-10-07 12:34:53.847235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:30.604 [2024-10-07 12:34:53.847245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:30.604 [2024-10-07 12:34:53.847255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:30.605 [2024-10-07 12:34:53.847265] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:30.605 [2024-10-07 12:34:53.847276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.605 [2024-10-07 12:34:53.847287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:30.605 [2024-10-07 12:34:53.847296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:30.605 [2024-10-07 12:34:53.847306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:30.605 [2024-10-07 12:34:53.847318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:30.605 [2024-10-07 12:34:53.847329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.605 [2024-10-07 12:34:53.847339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:30.605 [2024-10-07 12:34:53.847349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:24:30.605 [2024-10-07 12:34:53.847358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.605 [2024-10-07 12:34:53.892676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.605 [2024-10-07 12:34:53.892724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.605 [2024-10-07 12:34:53.892754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.343 ms 00:24:30.605 [2024-10-07 12:34:53.892765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.605 [2024-10-07 12:34:53.892857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.605 [2024-10-07 12:34:53.892868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:30.605 [2024-10-07 12:34:53.892879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:30.605 [2024-10-07 12:34:53.892889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:53.937121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:53.937163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.862 [2024-10-07 12:34:53.937184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.233 ms 00:24:30.862 [2024-10-07 12:34:53.937195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:53.937252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:53.937263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.862 [2024-10-07 12:34:53.937273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:30.862 [2024-10-07 12:34:53.937283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:53.937774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:53.937796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:24:30.862 [2024-10-07 12:34:53.937808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:24:30.862 [2024-10-07 12:34:53.937828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:53.937959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:53.937974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.862 [2024-10-07 12:34:53.937985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:30.862 [2024-10-07 12:34:53.937994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:53.957397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:53.957440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.862 [2024-10-07 12:34:53.957455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.412 ms 00:24:30.862 [2024-10-07 12:34:53.957466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:53.976586] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:30.862 [2024-10-07 12:34:53.976641] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:30.862 [2024-10-07 12:34:53.976673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:53.976683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:30.862 [2024-10-07 12:34:53.976695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.119 ms 00:24:30.862 [2024-10-07 12:34:53.976706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.006111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.006154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:30.862 [2024-10-07 12:34:54.006169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.411 ms 00:24:30.862 [2024-10-07 12:34:54.006179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.024447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.024486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:30.862 [2024-10-07 12:34:54.024499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.237 ms 00:24:30.862 [2024-10-07 12:34:54.024509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.042370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.042411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:30.862 [2024-10-07 12:34:54.042424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.823 ms 00:24:30.862 [2024-10-07 12:34:54.042434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.043220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.043252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:30.862 [2024-10-07 
12:34:54.043264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:24:30.862 [2024-10-07 12:34:54.043274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.128459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.128521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:30.862 [2024-10-07 12:34:54.128538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.299 ms 00:24:30.862 [2024-10-07 12:34:54.128565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.139219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:30.862 [2024-10-07 12:34:54.142297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.142327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:30.862 [2024-10-07 12:34:54.142355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.703 ms 00:24:30.862 [2024-10-07 12:34:54.142376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.142478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.142492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:30.862 [2024-10-07 12:34:54.142503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:30.862 [2024-10-07 12:34:54.142513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.142636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.142670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:30.862 [2024-10-07 12:34:54.142681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:30.862 [2024-10-07 12:34:54.142691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.142724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.142735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:30.862 [2024-10-07 12:34:54.142745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:30.862 [2024-10-07 12:34:54.142755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.862 [2024-10-07 12:34:54.142788] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:30.862 [2024-10-07 12:34:54.142801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.862 [2024-10-07 12:34:54.142811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:30.862 [2024-10-07 12:34:54.142825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:30.862 [2024-10-07 12:34:54.142835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.120 [2024-10-07 12:34:54.178325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.120 [2024-10-07 12:34:54.178381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:31.120 [2024-10-07 12:34:54.178411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.529 ms 00:24:31.120 [2024-10-07 12:34:54.178421] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.120 [2024-10-07 12:34:54.178496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.120 [2024-10-07 12:34:54.178508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:31.120 [2024-10-07 12:34:54.178518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:31.120 [2024-10-07 12:34:54.178528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.120 [2024-10-07 12:34:54.179743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.004 ms, result 0 00:24:32.052  [2024-10-07T12:34:56.274Z] Copying: 25/1024 [MB] (25 MBps) [2024-10-07T12:34:57.207Z] Copying: 50/1024 [MB] (24 MBps) [2024-10-07T12:34:58.578Z] Copying: 75/1024 [MB] (25 MBps) [2024-10-07T12:34:59.512Z] Copying: 99/1024 [MB] (23 MBps) [2024-10-07T12:35:00.444Z] Copying: 123/1024 [MB] (23 MBps) [2024-10-07T12:35:01.418Z] Copying: 148/1024 [MB] (24 MBps) [2024-10-07T12:35:02.351Z] Copying: 173/1024 [MB] (25 MBps) [2024-10-07T12:35:03.284Z] Copying: 198/1024 [MB] (25 MBps) [2024-10-07T12:35:04.217Z] Copying: 224/1024 [MB] (25 MBps) [2024-10-07T12:35:05.593Z] Copying: 248/1024 [MB] (24 MBps) [2024-10-07T12:35:06.528Z] Copying: 273/1024 [MB] (24 MBps) [2024-10-07T12:35:07.463Z] Copying: 297/1024 [MB] (24 MBps) [2024-10-07T12:35:08.399Z] Copying: 323/1024 [MB] (25 MBps) [2024-10-07T12:35:09.332Z] Copying: 347/1024 [MB] (24 MBps) [2024-10-07T12:35:10.268Z] Copying: 372/1024 [MB] (24 MBps) [2024-10-07T12:35:11.203Z] Copying: 396/1024 [MB] (24 MBps) [2024-10-07T12:35:12.579Z] Copying: 421/1024 [MB] (24 MBps) [2024-10-07T12:35:13.514Z] Copying: 445/1024 [MB] (24 MBps) [2024-10-07T12:35:14.449Z] Copying: 469/1024 [MB] (23 MBps) [2024-10-07T12:35:15.386Z] Copying: 493/1024 [MB] (23 MBps) [2024-10-07T12:35:16.325Z] Copying: 517/1024 [MB] (24 MBps) [2024-10-07T12:35:17.260Z] Copying: 542/1024 [MB] (24 MBps) [2024-10-07T12:35:18.195Z] Copying: 566/1024 [MB] (24 MBps) [2024-10-07T12:35:19.568Z] Copying: 591/1024 [MB] (24 MBps) [2024-10-07T12:35:20.501Z] Copying: 617/1024 [MB] (25 MBps) [2024-10-07T12:35:21.436Z] Copying: 642/1024 [MB] (25 MBps) [2024-10-07T12:35:22.370Z] Copying: 667/1024 [MB] (25 MBps) [2024-10-07T12:35:23.305Z] Copying: 692/1024 [MB] (25 MBps) [2024-10-07T12:35:24.239Z] Copying: 718/1024 [MB] (25 MBps) [2024-10-07T12:35:25.174Z] Copying: 744/1024 [MB] (25 MBps) [2024-10-07T12:35:26.549Z] Copying: 769/1024 [MB] (25 MBps) [2024-10-07T12:35:27.485Z] Copying: 794/1024 [MB] (25 MBps) [2024-10-07T12:35:28.420Z] Copying: 820/1024 [MB] (25 MBps) [2024-10-07T12:35:29.351Z] Copying: 845/1024 [MB] (24 MBps) [2024-10-07T12:35:30.285Z] Copying: 869/1024 [MB] (24 MBps) [2024-10-07T12:35:31.219Z] Copying: 895/1024 [MB] (25 MBps) [2024-10-07T12:35:32.154Z] Copying: 920/1024 [MB] (25 MBps) [2024-10-07T12:35:33.528Z] Copying: 945/1024 [MB] (24 MBps) [2024-10-07T12:35:34.462Z] Copying: 970/1024 [MB] (24 MBps) [2024-10-07T12:35:35.395Z] Copying: 994/1024 [MB] (24 MBps) [2024-10-07T12:35:36.330Z] Copying: 1019/1024 [MB] (24 MBps) [2024-10-07T12:35:36.330Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-10-07 12:35:35.987079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:35.987167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:13.039 [2024-10-07 12:35:35.987185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.003 ms 00:25:13.039 [2024-10-07 12:35:35.987195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:35.989099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:13.039 [2024-10-07 12:35:35.995512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:35.995554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:13.039 [2024-10-07 12:35:35.995568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.382 ms 00:25:13.039 [2024-10-07 12:35:35.995584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.004457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.004501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:13.039 [2024-10-07 12:35:36.004516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.595 ms 00:25:13.039 [2024-10-07 12:35:36.004526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.027559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.027602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:13.039 [2024-10-07 12:35:36.027618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.052 ms 00:25:13.039 [2024-10-07 12:35:36.027630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.032572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.032608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:13.039 [2024-10-07 12:35:36.032620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:25:13.039 [2024-10-07 12:35:36.032630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.068908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.068953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:13.039 [2024-10-07 12:35:36.068967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.278 ms 00:25:13.039 [2024-10-07 12:35:36.068977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.089492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.089552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:13.039 [2024-10-07 12:35:36.089583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.495 ms 00:25:13.039 [2024-10-07 12:35:36.089593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.202220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.202302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:13.039 [2024-10-07 12:35:36.202317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.766 ms 00:25:13.039 [2024-10-07 12:35:36.202328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.239115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 
[2024-10-07 12:35:36.239161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:13.039 [2024-10-07 12:35:36.239177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.827 ms 00:25:13.039 [2024-10-07 12:35:36.239187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.276171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.276214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:13.039 [2024-10-07 12:35:36.276228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.004 ms 00:25:13.039 [2024-10-07 12:35:36.276239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.039 [2024-10-07 12:35:36.312319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.039 [2024-10-07 12:35:36.312362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:13.039 [2024-10-07 12:35:36.312377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.097 ms 00:25:13.039 [2024-10-07 12:35:36.312403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.299 [2024-10-07 12:35:36.348750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.299 [2024-10-07 12:35:36.348792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:13.299 [2024-10-07 12:35:36.348807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.277 ms 00:25:13.299 [2024-10-07 12:35:36.348817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.299 [2024-10-07 12:35:36.348892] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:13.299 [2024-10-07 12:35:36.348923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100608 / 261120 wr_cnt: 1 state: open 00:25:13.299 [2024-10-07 12:35:36.348936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.348947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.348959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.348971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.348982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.348993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 
12:35:36.349056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:25:13.299 [2024-10-07 12:35:36.349320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:13.299 [2024-10-07 12:35:36.349790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:13.300 [2024-10-07 12:35:36.349996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:13.300 [2024-10-07 12:35:36.350008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 639eeb8e-3c5a-4629-a801-135403ce4b53 00:25:13.300 [2024-10-07 12:35:36.350025] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100608 00:25:13.300 [2024-10-07 12:35:36.350035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101568 00:25:13.300 [2024-10-07 12:35:36.350045] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100608 00:25:13.300 [2024-10-07 12:35:36.350056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:25:13.300 [2024-10-07 12:35:36.350066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:13.300 [2024-10-07 12:35:36.350076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:13.300 [2024-10-07 12:35:36.350086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:13.300 [2024-10-07 12:35:36.350095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:13.300 [2024-10-07 12:35:36.350105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:13.300 [2024-10-07 12:35:36.350114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.300 [2024-10-07 12:35:36.350135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:13.300 [2024-10-07 12:35:36.350146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:25:13.300 [2024-10-07 
12:35:36.350156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.369926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.300 [2024-10-07 12:35:36.369961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:13.300 [2024-10-07 12:35:36.369975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.764 ms 00:25:13.300 [2024-10-07 12:35:36.369986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.370520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.300 [2024-10-07 12:35:36.370541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:13.300 [2024-10-07 12:35:36.370559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:25:13.300 [2024-10-07 12:35:36.370568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.415414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.300 [2024-10-07 12:35:36.415459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.300 [2024-10-07 12:35:36.415481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.300 [2024-10-07 12:35:36.415492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.415550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.300 [2024-10-07 12:35:36.415561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.300 [2024-10-07 12:35:36.415578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.300 [2024-10-07 12:35:36.415588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.415674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.300 [2024-10-07 12:35:36.415691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.300 [2024-10-07 12:35:36.415702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.300 [2024-10-07 12:35:36.415712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.415729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.300 [2024-10-07 12:35:36.415742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.300 [2024-10-07 12:35:36.415752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.300 [2024-10-07 12:35:36.415777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.300 [2024-10-07 12:35:36.538046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.300 [2024-10-07 12:35:36.538114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.300 [2024-10-07 12:35:36.538129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.300 [2024-10-07 12:35:36.538155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.637841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.637923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.559 [2024-10-07 12:35:36.637938] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.637954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.638068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.559 [2024-10-07 12:35:36.638078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.638088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.638152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.559 [2024-10-07 12:35:36.638162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.638172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.638295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.559 [2024-10-07 12:35:36.638306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.638316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.638374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:13.559 [2024-10-07 12:35:36.638384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.638394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.638469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.559 [2024-10-07 12:35:36.638479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.638489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.559 [2024-10-07 12:35:36.638554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.559 [2024-10-07 12:35:36.638565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.559 [2024-10-07 12:35:36.638575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.559 [2024-10-07 12:35:36.638722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.166 ms, result 0 00:25:15.461 00:25:15.461 00:25:15.461 12:35:38 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:15.461 [2024-10-07 12:35:38.419032] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
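For orientation: the xtrace records at restore.sh@76, @79 and @80 are one data-integrity round-trip — write testfile into the ftl0 bdev at an offset, shut the FTL device down and restore it, read the region back, and compare checksums. A condensed sketch of that shape, using only the paths, offsets and counts visible in the log; the ordering below is the logical cycle, not a copy of the actual script:

    #!/usr/bin/env bash
    # Condensed sketch of the restore round-trip driven above; not the
    # actual test script. All values are taken from the log records.
    SPDK=/home/vagrant/spdk_repo/spdk
    DD=$SPDK/build/bin/spdk_dd
    CFG=$SPDK/test/ftl/config/ftl.json

    # write the test file into the ftl0 bdev at offset 131072
    $DD --if=$SPDK/test/ftl/testfile --ob=ftl0 --json=$CFG --seek=131072

    # (an FTL shutdown/startup cycle happens between the two dd runs)

    # read the same region back out of ftl0 ...
    $DD --ib=ftl0 --of=$SPDK/test/ftl/testfile --json=$CFG \
        --skip=131072 --count=262144

    # ... and verify the data survived the restore
    md5sum -c $SPDK/test/ftl/testfile.md5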
00:25:15.461 [2024-10-07 12:35:38.419149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78396 ] 00:25:15.461 [2024-10-07 12:35:38.591248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.719 [2024-10-07 12:35:38.806004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.977 [2024-10-07 12:35:39.162974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:15.977 [2024-10-07 12:35:39.163056] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:16.236 [2024-10-07 12:35:39.324108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.324167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:16.237 [2024-10-07 12:35:39.324184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.237 [2024-10-07 12:35:39.324198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.324253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.324265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.237 [2024-10-07 12:35:39.324276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:16.237 [2024-10-07 12:35:39.324286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.324306] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:16.237 [2024-10-07 12:35:39.325308] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:16.237 [2024-10-07 12:35:39.325335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.325346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.237 [2024-10-07 12:35:39.325357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:25:16.237 [2024-10-07 12:35:39.325367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.326855] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:16.237 [2024-10-07 12:35:39.346618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.346657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:16.237 [2024-10-07 12:35:39.346673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.795 ms 00:25:16.237 [2024-10-07 12:35:39.346684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.346753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.346766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:16.237 [2024-10-07 12:35:39.346777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:16.237 [2024-10-07 12:35:39.346787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.353582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:16.237 [2024-10-07 12:35:39.353611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.237 [2024-10-07 12:35:39.353624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.732 ms 00:25:16.237 [2024-10-07 12:35:39.353634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.353712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.353726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.237 [2024-10-07 12:35:39.353737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:16.237 [2024-10-07 12:35:39.353746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.353792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.353803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:16.237 [2024-10-07 12:35:39.353814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:16.237 [2024-10-07 12:35:39.353823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.353849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:16.237 [2024-10-07 12:35:39.358661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.358688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.237 [2024-10-07 12:35:39.358700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.826 ms 00:25:16.237 [2024-10-07 12:35:39.358726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.358759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.358770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:16.237 [2024-10-07 12:35:39.358781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:16.237 [2024-10-07 12:35:39.358791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.358856] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:16.237 [2024-10-07 12:35:39.358880] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:16.237 [2024-10-07 12:35:39.358927] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:16.237 [2024-10-07 12:35:39.358945] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:16.237 [2024-10-07 12:35:39.359054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:16.237 [2024-10-07 12:35:39.359074] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:16.237 [2024-10-07 12:35:39.359087] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:16.237 [2024-10-07 12:35:39.359105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359118] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359129] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:16.237 [2024-10-07 12:35:39.359139] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:16.237 [2024-10-07 12:35:39.359149] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:16.237 [2024-10-07 12:35:39.359159] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:16.237 [2024-10-07 12:35:39.359170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.359180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:16.237 [2024-10-07 12:35:39.359191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:25:16.237 [2024-10-07 12:35:39.359201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.359279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.237 [2024-10-07 12:35:39.359293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:16.237 [2024-10-07 12:35:39.359303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:16.237 [2024-10-07 12:35:39.359313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.237 [2024-10-07 12:35:39.359406] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:16.237 [2024-10-07 12:35:39.359421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:16.237 [2024-10-07 12:35:39.359432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:16.237 [2024-10-07 12:35:39.359462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:16.237 [2024-10-07 12:35:39.359492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.237 [2024-10-07 12:35:39.359513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:16.237 [2024-10-07 12:35:39.359523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:16.237 [2024-10-07 12:35:39.359533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.237 [2024-10-07 12:35:39.359552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:16.237 [2024-10-07 12:35:39.359562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:16.237 [2024-10-07 12:35:39.359571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:16.237 [2024-10-07 12:35:39.359590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359599] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:16.237 [2024-10-07 12:35:39.359618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:16.237 [2024-10-07 12:35:39.359645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:16.237 [2024-10-07 12:35:39.359673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:16.237 [2024-10-07 12:35:39.359700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:16.237 [2024-10-07 12:35:39.359727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.237 [2024-10-07 12:35:39.359745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:16.237 [2024-10-07 12:35:39.359754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:16.237 [2024-10-07 12:35:39.359763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.237 [2024-10-07 12:35:39.359772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:16.237 [2024-10-07 12:35:39.359781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:16.237 [2024-10-07 12:35:39.359790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:16.237 [2024-10-07 12:35:39.359808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:16.237 [2024-10-07 12:35:39.359818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:16.237 [2024-10-07 12:35:39.359838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:16.237 [2024-10-07 12:35:39.359851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.237 [2024-10-07 12:35:39.359861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.237 [2024-10-07 12:35:39.359871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:16.238 [2024-10-07 12:35:39.359881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:16.238 [2024-10-07 12:35:39.359890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:16.238 
[2024-10-07 12:35:39.359912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:16.238 [2024-10-07 12:35:39.359922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:16.238 [2024-10-07 12:35:39.359931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:16.238 [2024-10-07 12:35:39.359943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:16.238 [2024-10-07 12:35:39.359955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.359967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:16.238 [2024-10-07 12:35:39.359978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:16.238 [2024-10-07 12:35:39.359988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:16.238 [2024-10-07 12:35:39.359998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:16.238 [2024-10-07 12:35:39.360008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:16.238 [2024-10-07 12:35:39.360018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:16.238 [2024-10-07 12:35:39.360028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:16.238 [2024-10-07 12:35:39.360038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:16.238 [2024-10-07 12:35:39.360047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:16.238 [2024-10-07 12:35:39.360057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.360067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.360077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.360087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.360097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:16.238 [2024-10-07 12:35:39.360107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:16.238 [2024-10-07 12:35:39.360118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.360129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.238 [2024-10-07 12:35:39.360151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:16.238 [2024-10-07 12:35:39.360160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:16.238 [2024-10-07 12:35:39.360171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:16.238 [2024-10-07 12:35:39.360181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.360192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:16.238 [2024-10-07 12:35:39.360201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:25:16.238 [2024-10-07 12:35:39.360211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.407331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.407372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.238 [2024-10-07 12:35:39.407402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.147 ms 00:25:16.238 [2024-10-07 12:35:39.407413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.407503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.407515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:16.238 [2024-10-07 12:35:39.407526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:16.238 [2024-10-07 12:35:39.407537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.453697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.453740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.238 [2024-10-07 12:35:39.453758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.168 ms 00:25:16.238 [2024-10-07 12:35:39.453768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.453813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.453823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.238 [2024-10-07 12:35:39.453834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:16.238 [2024-10-07 12:35:39.453843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.454359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.454379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.238 [2024-10-07 12:35:39.454390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:25:16.238 [2024-10-07 12:35:39.454407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.454527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.454541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.238 [2024-10-07 12:35:39.454551] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:16.238 [2024-10-07 12:35:39.454561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.473495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.473533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.238 [2024-10-07 12:35:39.473559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.943 ms 00:25:16.238 [2024-10-07 12:35:39.473570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.493024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:16.238 [2024-10-07 12:35:39.493065] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:16.238 [2024-10-07 12:35:39.493080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.493090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:16.238 [2024-10-07 12:35:39.493102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:25:16.238 [2024-10-07 12:35:39.493112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.238 [2024-10-07 12:35:39.522151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.238 [2024-10-07 12:35:39.522198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:16.238 [2024-10-07 12:35:39.522214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.039 ms 00:25:16.238 [2024-10-07 12:35:39.522225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.541023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.541062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:16.518 [2024-10-07 12:35:39.541076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.767 ms 00:25:16.518 [2024-10-07 12:35:39.541086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.558989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.559022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:16.518 [2024-10-07 12:35:39.559035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.892 ms 00:25:16.518 [2024-10-07 12:35:39.559044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.559835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.559859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:16.518 [2024-10-07 12:35:39.559872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:25:16.518 [2024-10-07 12:35:39.559882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.645420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.645482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:16.518 [2024-10-07 12:35:39.645515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.634 ms 00:25:16.518 [2024-10-07 12:35:39.645527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.656281] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:16.518 [2024-10-07 12:35:39.659193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.659221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:16.518 [2024-10-07 12:35:39.659251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.628 ms 00:25:16.518 [2024-10-07 12:35:39.659266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.659355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.659369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:16.518 [2024-10-07 12:35:39.659380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:16.518 [2024-10-07 12:35:39.659390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.660872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.660925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:16.518 [2024-10-07 12:35:39.660938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:25:16.518 [2024-10-07 12:35:39.660948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.660992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.661003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:16.518 [2024-10-07 12:35:39.661013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:16.518 [2024-10-07 12:35:39.661023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.661058] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:16.518 [2024-10-07 12:35:39.661070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.661080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:16.518 [2024-10-07 12:35:39.661095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:16.518 [2024-10-07 12:35:39.661105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.518 [2024-10-07 12:35:39.696757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.518 [2024-10-07 12:35:39.696794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:16.518 [2024-10-07 12:35:39.696809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.690 ms 00:25:16.519 [2024-10-07 12:35:39.696819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.519 [2024-10-07 12:35:39.696895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.519 [2024-10-07 12:35:39.696916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:16.519 [2024-10-07 12:35:39.696927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:16.519 [2024-10-07 12:35:39.696937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:16.519 [2024-10-07 12:35:39.698023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.043 ms, result 0 00:25:18.010  [2024-10-07T12:35:42.233Z] Copying: 20/1024 [MB] (20 MBps) [2024-10-07T12:35:43.167Z] Copying: 46/1024 [MB] (25 MBps) [2024-10-07T12:35:44.101Z] Copying: 72/1024 [MB] (26 MBps) [2024-10-07T12:35:45.035Z] Copying: 98/1024 [MB] (26 MBps) [2024-10-07T12:35:45.969Z] Copying: 124/1024 [MB] (25 MBps) [2024-10-07T12:35:47.343Z] Copying: 150/1024 [MB] (25 MBps) [2024-10-07T12:35:47.911Z] Copying: 176/1024 [MB] (26 MBps) [2024-10-07T12:35:49.297Z] Copying: 202/1024 [MB] (25 MBps) [2024-10-07T12:35:50.234Z] Copying: 227/1024 [MB] (25 MBps) [2024-10-07T12:35:51.171Z] Copying: 253/1024 [MB] (26 MBps) [2024-10-07T12:35:52.107Z] Copying: 280/1024 [MB] (26 MBps) [2024-10-07T12:35:53.044Z] Copying: 306/1024 [MB] (26 MBps) [2024-10-07T12:35:53.980Z] Copying: 332/1024 [MB] (26 MBps) [2024-10-07T12:35:54.917Z] Copying: 358/1024 [MB] (25 MBps) [2024-10-07T12:35:56.293Z] Copying: 384/1024 [MB] (26 MBps) [2024-10-07T12:35:57.230Z] Copying: 410/1024 [MB] (25 MBps) [2024-10-07T12:35:58.166Z] Copying: 436/1024 [MB] (25 MBps) [2024-10-07T12:35:59.102Z] Copying: 462/1024 [MB] (25 MBps) [2024-10-07T12:36:00.038Z] Copying: 487/1024 [MB] (25 MBps) [2024-10-07T12:36:00.975Z] Copying: 513/1024 [MB] (25 MBps) [2024-10-07T12:36:01.912Z] Copying: 539/1024 [MB] (26 MBps) [2024-10-07T12:36:03.291Z] Copying: 565/1024 [MB] (25 MBps) [2024-10-07T12:36:04.227Z] Copying: 590/1024 [MB] (25 MBps) [2024-10-07T12:36:05.163Z] Copying: 616/1024 [MB] (25 MBps) [2024-10-07T12:36:06.134Z] Copying: 642/1024 [MB] (25 MBps) [2024-10-07T12:36:07.070Z] Copying: 667/1024 [MB] (25 MBps) [2024-10-07T12:36:08.007Z] Copying: 693/1024 [MB] (25 MBps) [2024-10-07T12:36:08.941Z] Copying: 719/1024 [MB] (25 MBps) [2024-10-07T12:36:09.878Z] Copying: 744/1024 [MB] (25 MBps) [2024-10-07T12:36:11.256Z] Copying: 770/1024 [MB] (25 MBps) [2024-10-07T12:36:12.194Z] Copying: 796/1024 [MB] (25 MBps) [2024-10-07T12:36:13.131Z] Copying: 821/1024 [MB] (25 MBps) [2024-10-07T12:36:14.068Z] Copying: 845/1024 [MB] (23 MBps) [2024-10-07T12:36:15.004Z] Copying: 871/1024 [MB] (26 MBps) [2024-10-07T12:36:15.980Z] Copying: 897/1024 [MB] (25 MBps) [2024-10-07T12:36:16.916Z] Copying: 923/1024 [MB] (26 MBps) [2024-10-07T12:36:18.292Z] Copying: 948/1024 [MB] (25 MBps) [2024-10-07T12:36:19.227Z] Copying: 974/1024 [MB] (25 MBps) [2024-10-07T12:36:19.794Z] Copying: 1000/1024 [MB] (25 MBps) [2024-10-07T12:36:20.362Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-07 12:36:20.233017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.071 [2024-10-07 12:36:20.233534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:57.071 [2024-10-07 12:36:20.233658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.071 [2024-10-07 12:36:20.233790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.071 [2024-10-07 12:36:20.233868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:57.071 [2024-10-07 12:36:20.239567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.071 [2024-10-07 12:36:20.239722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:57.071 [2024-10-07 12:36:20.239809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.546 ms 00:25:57.071 
[2024-10-07 12:36:20.239895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.071 [2024-10-07 12:36:20.240181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.071 [2024-10-07 12:36:20.240306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:57.071 [2024-10-07 12:36:20.240395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:25:57.071 [2024-10-07 12:36:20.240433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.071 [2024-10-07 12:36:20.245451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.071 [2024-10-07 12:36:20.245604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:57.071 [2024-10-07 12:36:20.245697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.921 ms 00:25:57.071 [2024-10-07 12:36:20.245715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.071 [2024-10-07 12:36:20.252042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.071 [2024-10-07 12:36:20.252186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:57.071 [2024-10-07 12:36:20.252278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.287 ms 00:25:57.071 [2024-10-07 12:36:20.252315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.072 [2024-10-07 12:36:20.291119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.072 [2024-10-07 12:36:20.291312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:57.072 [2024-10-07 12:36:20.291395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.729 ms 00:25:57.072 [2024-10-07 12:36:20.291431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.072 [2024-10-07 12:36:20.311356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.072 [2024-10-07 12:36:20.311527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.072 [2024-10-07 12:36:20.311599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.899 ms 00:25:57.072 [2024-10-07 12:36:20.311634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.333 [2024-10-07 12:36:20.462486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.333 [2024-10-07 12:36:20.462664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.333 [2024-10-07 12:36:20.462760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 151.023 ms 00:25:57.333 [2024-10-07 12:36:20.462796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.333 [2024-10-07 12:36:20.499327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.333 [2024-10-07 12:36:20.499475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.333 [2024-10-07 12:36:20.499557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.546 ms 00:25:57.333 [2024-10-07 12:36:20.499592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.333 [2024-10-07 12:36:20.536871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.333 [2024-10-07 12:36:20.537061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.333 [2024-10-07 12:36:20.537215] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.234 ms 00:25:57.333 [2024-10-07 12:36:20.537255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.333 [2024-10-07 12:36:20.572039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.333 [2024-10-07 12:36:20.572206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.333 [2024-10-07 12:36:20.572337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.784 ms 00:25:57.333 [2024-10-07 12:36:20.572374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.333 [2024-10-07 12:36:20.606908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.333 [2024-10-07 12:36:20.607077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.333 [2024-10-07 12:36:20.607182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.481 ms 00:25:57.333 [2024-10-07 12:36:20.607198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.333 [2024-10-07 12:36:20.607298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.333 [2024-10-07 12:36:20.607324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:57.334 [2024-10-07 12:36:20.607337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607499] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607762] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.607999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 
12:36:20.608045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.334 [2024-10-07 12:36:20.608304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:25:57.335 [2024-10-07 12:36:20.608314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.335 [2024-10-07 12:36:20.608416] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.335 [2024-10-07 12:36:20.608426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 639eeb8e-3c5a-4629-a801-135403ce4b53 00:25:57.335 [2024-10-07 12:36:20.608442] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:57.335 [2024-10-07 12:36:20.608452] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31424 00:25:57.335 [2024-10-07 12:36:20.608461] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30464 00:25:57.335 [2024-10-07 12:36:20.608472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0315 00:25:57.335 [2024-10-07 12:36:20.608482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.335 [2024-10-07 12:36:20.608492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.335 [2024-10-07 12:36:20.608502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.335 [2024-10-07 12:36:20.608512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.335 [2024-10-07 12:36:20.608521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.335 [2024-10-07 12:36:20.608531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.335 [2024-10-07 12:36:20.608541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.335 [2024-10-07 12:36:20.608561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:25:57.335 [2024-10-07 12:36:20.608571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.624 [2024-10-07 12:36:20.627481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.624 [2024-10-07 12:36:20.627517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.625 [2024-10-07 12:36:20.627545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:25:57.625 [2024-10-07 12:36:20.627555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.628104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:57.625 [2024-10-07 12:36:20.628122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.625 [2024-10-07 12:36:20.628138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:25:57.625 [2024-10-07 12:36:20.628148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.671319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.671359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.625 [2024-10-07 12:36:20.671372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.671397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.671450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.671462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.625 [2024-10-07 12:36:20.671472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.671487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.671567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.671581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.625 [2024-10-07 12:36:20.671591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.671600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.671618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.671628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.625 [2024-10-07 12:36:20.671638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.671648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.791115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.791181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.625 [2024-10-07 12:36:20.791196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.791223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.886572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.886649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.625 [2024-10-07 12:36:20.886664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.886680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.886764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.886776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.625 [2024-10-07 12:36:20.886787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.886797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:57.625 [2024-10-07 12:36:20.886849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.886859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.625 [2024-10-07 12:36:20.886869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.886879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.887024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.887040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.625 [2024-10-07 12:36:20.887051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.887061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.887096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.887108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.625 [2024-10-07 12:36:20.887119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.887129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.887170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.887189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.625 [2024-10-07 12:36:20.887200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.887210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.887259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-10-07 12:36:20.887271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.625 [2024-10-07 12:36:20.887281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-10-07 12:36:20.887292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-10-07 12:36:20.887409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 655.436 ms, result 0 00:25:59.013 00:25:59.013 00:25:59.013 12:36:22 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:00.919 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:00.919 Process with pid 76760 is not found 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76760 00:26:00.919 12:36:23 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76760 ']' 00:26:00.919 12:36:23 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76760 00:26:00.919 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76760) - No such process 00:26:00.919 12:36:23 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 76760 is not found' 00:26:00.919 Remove shared memory files 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:00.919 12:36:23 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:00.919 00:26:00.919 real 3m20.534s 00:26:00.919 user 3m7.360s 00:26:00.919 sys 0m13.913s 00:26:00.919 12:36:23 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:00.919 ************************************ 00:26:00.919 END TEST ftl_restore 00:26:00.919 ************************************ 00:26:00.919 12:36:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:00.919 12:36:23 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:00.919 12:36:23 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:00.919 12:36:23 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.919 12:36:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:00.919 ************************************ 00:26:00.919 START TEST ftl_dirty_shutdown 00:26:00.919 ************************************ 00:26:00.919 12:36:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:00.919 * Looking for test storage... 
00:26:00.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:00.919 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:00.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.920 --rc genhtml_branch_coverage=1 00:26:00.920 --rc genhtml_function_coverage=1 00:26:00.920 --rc genhtml_legend=1 00:26:00.920 --rc geninfo_all_blocks=1 00:26:00.920 --rc geninfo_unexecuted_blocks=1 00:26:00.920 00:26:00.920 ' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:00.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.920 --rc genhtml_branch_coverage=1 00:26:00.920 --rc genhtml_function_coverage=1 00:26:00.920 --rc genhtml_legend=1 00:26:00.920 --rc geninfo_all_blocks=1 00:26:00.920 --rc geninfo_unexecuted_blocks=1 00:26:00.920 00:26:00.920 ' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:00.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.920 --rc genhtml_branch_coverage=1 00:26:00.920 --rc genhtml_function_coverage=1 00:26:00.920 --rc genhtml_legend=1 00:26:00.920 --rc geninfo_all_blocks=1 00:26:00.920 --rc geninfo_unexecuted_blocks=1 00:26:00.920 00:26:00.920 ' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:00.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.920 --rc genhtml_branch_coverage=1 00:26:00.920 --rc genhtml_function_coverage=1 00:26:00.920 --rc genhtml_legend=1 00:26:00.920 --rc geninfo_all_blocks=1 00:26:00.920 --rc geninfo_unexecuted_blocks=1 00:26:00.920 00:26:00.920 ' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:00.920 12:36:24 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78923 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78923 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78923 ']' 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.920 12:36:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:01.179 [2024-10-07 12:36:24.304006] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
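# Aside: the option handling traced above (dirty_shutdown.sh@14-28) amounts to
# the following getopts pattern (a sketch; the meaning of -u is assumed and it
# is unused in this run):
while getopts ":u:c:" opt; do
	case $opt in
		c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 -> PCIe address of the NV cache device
		u) base_uuid=$OPTARG ;;  # assumed: UUID of a pre-existing base bdev
	esac
done
shift $((OPTIND - 1))            # "shift 2" in the trace: one option plus its argument
device=$1                        # positional arg: base device, here 0000:00:11.0
timeout=240                      # seconds granted to the slow bdev_ftl_* RPCs (-t 240 below)
block_size=4096                  # 4 KiB logical blocks
data_size=262144                 # 262144 blocks * 4 KiB = 1 GiB of test data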
00:26:01.179 [2024-10-07 12:36:24.304180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78923 ] 00:26:01.438 [2024-10-07 12:36:24.475010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.438 [2024-10-07 12:36:24.687007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:02.386 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:02.644 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:02.645 12:36:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:02.904 { 00:26:02.904 "name": "nvme0n1", 00:26:02.904 "aliases": [ 00:26:02.904 "40b7fd09-716f-4474-b7f9-f06e9993dab9" 00:26:02.904 ], 00:26:02.904 "product_name": "NVMe disk", 00:26:02.904 "block_size": 4096, 00:26:02.904 "num_blocks": 1310720, 00:26:02.904 "uuid": "40b7fd09-716f-4474-b7f9-f06e9993dab9", 00:26:02.904 "numa_id": -1, 00:26:02.904 "assigned_rate_limits": { 00:26:02.904 "rw_ios_per_sec": 0, 00:26:02.904 "rw_mbytes_per_sec": 0, 00:26:02.904 "r_mbytes_per_sec": 0, 00:26:02.904 "w_mbytes_per_sec": 0 00:26:02.904 }, 00:26:02.904 "claimed": true, 00:26:02.904 "claim_type": "read_many_write_one", 00:26:02.904 "zoned": false, 00:26:02.904 "supported_io_types": { 00:26:02.904 "read": true, 00:26:02.904 "write": true, 00:26:02.904 "unmap": true, 00:26:02.904 "flush": true, 00:26:02.904 "reset": true, 00:26:02.904 "nvme_admin": true, 00:26:02.904 "nvme_io": true, 00:26:02.904 "nvme_io_md": false, 00:26:02.904 "write_zeroes": true, 00:26:02.904 "zcopy": false, 00:26:02.904 "get_zone_info": false, 00:26:02.904 "zone_management": false, 00:26:02.904 "zone_append": false, 00:26:02.904 "compare": true, 00:26:02.904 "compare_and_write": false, 00:26:02.904 "abort": true, 00:26:02.904 "seek_hole": false, 00:26:02.904 "seek_data": false, 00:26:02.904 
"copy": true, 00:26:02.904 "nvme_iov_md": false 00:26:02.904 }, 00:26:02.904 "driver_specific": { 00:26:02.904 "nvme": [ 00:26:02.904 { 00:26:02.904 "pci_address": "0000:00:11.0", 00:26:02.904 "trid": { 00:26:02.904 "trtype": "PCIe", 00:26:02.904 "traddr": "0000:00:11.0" 00:26:02.904 }, 00:26:02.904 "ctrlr_data": { 00:26:02.904 "cntlid": 0, 00:26:02.904 "vendor_id": "0x1b36", 00:26:02.904 "model_number": "QEMU NVMe Ctrl", 00:26:02.904 "serial_number": "12341", 00:26:02.904 "firmware_revision": "8.0.0", 00:26:02.904 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:02.904 "oacs": { 00:26:02.904 "security": 0, 00:26:02.904 "format": 1, 00:26:02.904 "firmware": 0, 00:26:02.904 "ns_manage": 1 00:26:02.904 }, 00:26:02.904 "multi_ctrlr": false, 00:26:02.904 "ana_reporting": false 00:26:02.904 }, 00:26:02.904 "vs": { 00:26:02.904 "nvme_version": "1.4" 00:26:02.904 }, 00:26:02.904 "ns_data": { 00:26:02.904 "id": 1, 00:26:02.904 "can_share": false 00:26:02.904 } 00:26:02.904 } 00:26:02.904 ], 00:26:02.904 "mp_policy": "active_passive" 00:26:02.904 } 00:26:02.904 } 00:26:02.904 ]' 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:02.904 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:03.163 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=70db2f1d-583a-4651-86bc-4e3171117211 00:26:03.163 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:03.163 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70db2f1d-583a-4651-86bc-4e3171117211 00:26:03.422 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:03.681 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=3028ca6c-5bd8-4f23-8f9d-126b1581a918 00:26:03.681 12:36:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3028ca6c-5bd8-4f23-8f9d-126b1581a918 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:03.940 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:04.199 { 00:26:04.199 "name": "736e37e8-4c37-48c0-809d-e51bee4737c7", 00:26:04.199 "aliases": [ 00:26:04.199 "lvs/nvme0n1p0" 00:26:04.199 ], 00:26:04.199 "product_name": "Logical Volume", 00:26:04.199 "block_size": 4096, 00:26:04.199 "num_blocks": 26476544, 00:26:04.199 "uuid": "736e37e8-4c37-48c0-809d-e51bee4737c7", 00:26:04.199 "assigned_rate_limits": { 00:26:04.199 "rw_ios_per_sec": 0, 00:26:04.199 "rw_mbytes_per_sec": 0, 00:26:04.199 "r_mbytes_per_sec": 0, 00:26:04.199 "w_mbytes_per_sec": 0 00:26:04.199 }, 00:26:04.199 "claimed": false, 00:26:04.199 "zoned": false, 00:26:04.199 "supported_io_types": { 00:26:04.199 "read": true, 00:26:04.199 "write": true, 00:26:04.199 "unmap": true, 00:26:04.199 "flush": false, 00:26:04.199 "reset": true, 00:26:04.199 "nvme_admin": false, 00:26:04.199 "nvme_io": false, 00:26:04.199 "nvme_io_md": false, 00:26:04.199 "write_zeroes": true, 00:26:04.199 "zcopy": false, 00:26:04.199 "get_zone_info": false, 00:26:04.199 "zone_management": false, 00:26:04.199 "zone_append": false, 00:26:04.199 "compare": false, 00:26:04.199 "compare_and_write": false, 00:26:04.199 "abort": false, 00:26:04.199 "seek_hole": true, 00:26:04.199 "seek_data": true, 00:26:04.199 "copy": false, 00:26:04.199 "nvme_iov_md": false 00:26:04.199 }, 00:26:04.199 "driver_specific": { 00:26:04.199 "lvol": { 00:26:04.199 "lvol_store_uuid": "3028ca6c-5bd8-4f23-8f9d-126b1581a918", 00:26:04.199 "base_bdev": "nvme0n1", 00:26:04.199 "thin_provision": true, 00:26:04.199 "num_allocated_clusters": 0, 00:26:04.199 "snapshot": false, 00:26:04.199 "clone": false, 00:26:04.199 "esnap_clone": false 00:26:04.199 } 00:26:04.199 } 00:26:04.199 } 00:26:04.199 ]' 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:04.199 12:36:27 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:04.458 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.717 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:04.717 { 00:26:04.717 "name": "736e37e8-4c37-48c0-809d-e51bee4737c7", 00:26:04.717 "aliases": [ 00:26:04.717 "lvs/nvme0n1p0" 00:26:04.717 ], 00:26:04.718 "product_name": "Logical Volume", 00:26:04.718 "block_size": 4096, 00:26:04.718 "num_blocks": 26476544, 00:26:04.718 "uuid": "736e37e8-4c37-48c0-809d-e51bee4737c7", 00:26:04.718 "assigned_rate_limits": { 00:26:04.718 "rw_ios_per_sec": 0, 00:26:04.718 "rw_mbytes_per_sec": 0, 00:26:04.718 "r_mbytes_per_sec": 0, 00:26:04.718 "w_mbytes_per_sec": 0 00:26:04.718 }, 00:26:04.718 "claimed": false, 00:26:04.718 "zoned": false, 00:26:04.718 "supported_io_types": { 00:26:04.718 "read": true, 00:26:04.718 "write": true, 00:26:04.718 "unmap": true, 00:26:04.718 "flush": false, 00:26:04.718 "reset": true, 00:26:04.718 "nvme_admin": false, 00:26:04.718 "nvme_io": false, 00:26:04.718 "nvme_io_md": false, 00:26:04.718 "write_zeroes": true, 00:26:04.718 "zcopy": false, 00:26:04.718 "get_zone_info": false, 00:26:04.718 "zone_management": false, 00:26:04.718 "zone_append": false, 00:26:04.718 "compare": false, 00:26:04.718 "compare_and_write": false, 00:26:04.718 "abort": false, 00:26:04.718 "seek_hole": true, 00:26:04.718 "seek_data": true, 00:26:04.718 "copy": false, 00:26:04.718 "nvme_iov_md": false 00:26:04.718 }, 00:26:04.718 "driver_specific": { 00:26:04.718 "lvol": { 00:26:04.718 "lvol_store_uuid": "3028ca6c-5bd8-4f23-8f9d-126b1581a918", 00:26:04.718 "base_bdev": "nvme0n1", 00:26:04.718 "thin_provision": true, 00:26:04.718 "num_allocated_clusters": 0, 00:26:04.718 "snapshot": false, 00:26:04.718 "clone": false, 00:26:04.718 "esnap_clone": false 00:26:04.718 } 00:26:04.718 } 00:26:04.718 } 00:26:04.718 ]' 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:04.718 12:36:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 736e37e8-4c37-48c0-809d-e51bee4737c7 00:26:04.976 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:04.976 { 00:26:04.976 "name": "736e37e8-4c37-48c0-809d-e51bee4737c7", 00:26:04.977 "aliases": [ 00:26:04.977 "lvs/nvme0n1p0" 00:26:04.977 ], 00:26:04.977 "product_name": "Logical Volume", 00:26:04.977 "block_size": 4096, 00:26:04.977 "num_blocks": 26476544, 00:26:04.977 "uuid": "736e37e8-4c37-48c0-809d-e51bee4737c7", 00:26:04.977 "assigned_rate_limits": { 00:26:04.977 "rw_ios_per_sec": 0, 00:26:04.977 "rw_mbytes_per_sec": 0, 00:26:04.977 "r_mbytes_per_sec": 0, 00:26:04.977 "w_mbytes_per_sec": 0 00:26:04.977 }, 00:26:04.977 "claimed": false, 00:26:04.977 "zoned": false, 00:26:04.977 "supported_io_types": { 00:26:04.977 "read": true, 00:26:04.977 "write": true, 00:26:04.977 "unmap": true, 00:26:04.977 "flush": false, 00:26:04.977 "reset": true, 00:26:04.977 "nvme_admin": false, 00:26:04.977 "nvme_io": false, 00:26:04.977 "nvme_io_md": false, 00:26:04.977 "write_zeroes": true, 00:26:04.977 "zcopy": false, 00:26:04.977 "get_zone_info": false, 00:26:04.977 "zone_management": false, 00:26:04.977 "zone_append": false, 00:26:04.977 "compare": false, 00:26:04.977 "compare_and_write": false, 00:26:04.977 "abort": false, 00:26:04.977 "seek_hole": true, 00:26:04.977 "seek_data": true, 00:26:04.977 "copy": false, 00:26:04.977 "nvme_iov_md": false 00:26:04.977 }, 00:26:04.977 "driver_specific": { 00:26:04.977 "lvol": { 00:26:04.977 "lvol_store_uuid": "3028ca6c-5bd8-4f23-8f9d-126b1581a918", 00:26:04.977 "base_bdev": "nvme0n1", 00:26:04.977 "thin_provision": true, 00:26:04.977 "num_allocated_clusters": 0, 00:26:04.977 "snapshot": false, 00:26:04.977 "clone": false, 00:26:04.977 "esnap_clone": false 00:26:04.977 } 00:26:04.977 } 00:26:04.977 } 00:26:04.977 ]' 00:26:04.977 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 736e37e8-4c37-48c0-809d-e51bee4737c7 
--l2p_dram_limit 10' 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:05.246 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:05.247 12:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 736e37e8-4c37-48c0-809d-e51bee4737c7 --l2p_dram_limit 10 -c nvc0n1p0 00:26:05.247 [2024-10-07 12:36:28.505206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.505268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.247 [2024-10-07 12:36:28.505304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:05.247 [2024-10-07 12:36:28.505315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.505378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.505390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.247 [2024-10-07 12:36:28.505404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:05.247 [2024-10-07 12:36:28.505415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.505448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.247 [2024-10-07 12:36:28.506480] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.247 [2024-10-07 12:36:28.506521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.506532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.247 [2024-10-07 12:36:28.506546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:26:05.247 [2024-10-07 12:36:28.506559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.506640] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cfa8e74c-67c0-4e91-a7e6-4267d33e5f42 00:26:05.247 [2024-10-07 12:36:28.508108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.508151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:05.247 [2024-10-07 12:36:28.508165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:05.247 [2024-10-07 12:36:28.508178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.515670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.515712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.247 [2024-10-07 12:36:28.515724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.452 ms 00:26:05.247 [2024-10-07 12:36:28.515737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.515836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.515853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.247 [2024-10-07 12:36:28.515864] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:05.247 [2024-10-07 12:36:28.515884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.515980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.515998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.247 [2024-10-07 12:36:28.516009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:05.247 [2024-10-07 12:36:28.516021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.516047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.247 [2024-10-07 12:36:28.521329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.521361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.247 [2024-10-07 12:36:28.521375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.295 ms 00:26:05.247 [2024-10-07 12:36:28.521386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.521440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.521451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.247 [2024-10-07 12:36:28.521464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:05.247 [2024-10-07 12:36:28.521478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.521526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:05.247 [2024-10-07 12:36:28.521673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.247 [2024-10-07 12:36:28.521697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.247 [2024-10-07 12:36:28.521711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:05.247 [2024-10-07 12:36:28.521730] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.247 [2024-10-07 12:36:28.521742] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.247 [2024-10-07 12:36:28.521756] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:05.247 [2024-10-07 12:36:28.521766] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.247 [2024-10-07 12:36:28.521778] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.247 [2024-10-07 12:36:28.521789] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.247 [2024-10-07 12:36:28.521803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.521822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.247 [2024-10-07 12:36:28.521836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:26:05.247 [2024-10-07 12:36:28.521846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.521935] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.247 [2024-10-07 12:36:28.521951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.247 [2024-10-07 12:36:28.521964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:05.247 [2024-10-07 12:36:28.521974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.247 [2024-10-07 12:36:28.522062] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.247 [2024-10-07 12:36:28.522074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.247 [2024-10-07 12:36:28.522087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.248 [2024-10-07 12:36:28.522120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.248 [2024-10-07 12:36:28.522153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.248 [2024-10-07 12:36:28.522174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.248 [2024-10-07 12:36:28.522183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:05.248 [2024-10-07 12:36:28.522195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.248 [2024-10-07 12:36:28.522205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.248 [2024-10-07 12:36:28.522216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:05.248 [2024-10-07 12:36:28.522225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:05.248 [2024-10-07 12:36:28.522248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.248 [2024-10-07 12:36:28.522281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.248 [2024-10-07 12:36:28.522314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.248 [2024-10-07 12:36:28.522346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522366] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.248 [2024-10-07 12:36:28.522376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.248 [2024-10-07 12:36:28.522411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.248 [2024-10-07 12:36:28.522432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.248 [2024-10-07 12:36:28.522442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:05.248 [2024-10-07 12:36:28.522453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.248 [2024-10-07 12:36:28.522462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.248 [2024-10-07 12:36:28.522474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:05.248 [2024-10-07 12:36:28.522483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.248 [2024-10-07 12:36:28.522504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:05.248 [2024-10-07 12:36:28.522515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522524] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.248 [2024-10-07 12:36:28.522537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.248 [2024-10-07 12:36:28.522549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.248 [2024-10-07 12:36:28.522572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.248 [2024-10-07 12:36:28.522587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.248 [2024-10-07 12:36:28.522596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.248 [2024-10-07 12:36:28.522608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.248 [2024-10-07 12:36:28.522617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.248 [2024-10-07 12:36:28.522629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.248 [2024-10-07 12:36:28.522643] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.248 [2024-10-07 12:36:28.522658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.248 [2024-10-07 12:36:28.522670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:05.248 [2024-10-07 12:36:28.522683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:05.248 [2024-10-07 12:36:28.522693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:05.248 [2024-10-07 12:36:28.522705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:05.248 [2024-10-07 12:36:28.522716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:05.248 [2024-10-07 12:36:28.522729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:05.248 [2024-10-07 12:36:28.522740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:05.248 [2024-10-07 12:36:28.522752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:05.248 [2024-10-07 12:36:28.522763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:05.248 [2024-10-07 12:36:28.522779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:05.248 [2024-10-07 12:36:28.522790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:05.248 [2024-10-07 12:36:28.522802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:05.248 [2024-10-07 12:36:28.522812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:05.249 [2024-10-07 12:36:28.522825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:05.249 [2024-10-07 12:36:28.522835] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:05.249 [2024-10-07 12:36:28.522848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.249 [2024-10-07 12:36:28.522859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.249 [2024-10-07 12:36:28.522873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.249 [2024-10-07 12:36:28.522883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.249 [2024-10-07 12:36:28.522895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.249 [2024-10-07 12:36:28.522915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.249 [2024-10-07 12:36:28.522928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.249 [2024-10-07 12:36:28.522938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:26:05.249 [2024-10-07 12:36:28.522950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.249 [2024-10-07 12:36:28.523003] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:05.249 [2024-10-07 12:36:28.523021] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:09.441 [2024-10-07 12:36:32.217969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.441 [2024-10-07 12:36:32.218059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:09.441 [2024-10-07 12:36:32.218076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3700.963 ms 00:26:09.441 [2024-10-07 12:36:32.218089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.441 [2024-10-07 12:36:32.253884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.441 [2024-10-07 12:36:32.253967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:09.441 [2024-10-07 12:36:32.253983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.436 ms 00:26:09.441 [2024-10-07 12:36:32.253996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.441 [2024-10-07 12:36:32.254140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.441 [2024-10-07 12:36:32.254156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:09.442 [2024-10-07 12:36:32.254168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:09.442 [2024-10-07 12:36:32.254186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.309411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.309472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:09.442 [2024-10-07 12:36:32.309494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.247 ms 00:26:09.442 [2024-10-07 12:36:32.309512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.309560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.309577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:09.442 [2024-10-07 12:36:32.309591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:09.442 [2024-10-07 12:36:32.309618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.310215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.310270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:09.442 [2024-10-07 12:36:32.310291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:26:09.442 [2024-10-07 12:36:32.310324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.310506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.310534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:09.442 [2024-10-07 12:36:32.310547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:26:09.442 [2024-10-07 12:36:32.310574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.331437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.331486] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:09.442 [2024-10-07 12:36:32.331502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.854 ms 00:26:09.442 [2024-10-07 12:36:32.331534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.343297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:09.442 [2024-10-07 12:36:32.346586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.346616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:09.442 [2024-10-07 12:36:32.346651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.972 ms 00:26:09.442 [2024-10-07 12:36:32.346661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.441516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.441600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:09.442 [2024-10-07 12:36:32.441623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.968 ms 00:26:09.442 [2024-10-07 12:36:32.441634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.441820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.441833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:09.442 [2024-10-07 12:36:32.441849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:09.442 [2024-10-07 12:36:32.441859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.475744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.475792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:09.442 [2024-10-07 12:36:32.475809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.864 ms 00:26:09.442 [2024-10-07 12:36:32.475819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.510682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.510734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:09.442 [2024-10-07 12:36:32.510752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.853 ms 00:26:09.442 [2024-10-07 12:36:32.510762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.511625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.511662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:09.442 [2024-10-07 12:36:32.511678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:26:09.442 [2024-10-07 12:36:32.511689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.612134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.612190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:09.442 [2024-10-07 12:36:32.612212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.543 ms 00:26:09.442 [2024-10-07 12:36:32.612227] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.649468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.649516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:09.442 [2024-10-07 12:36:32.649535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.216 ms 00:26:09.442 [2024-10-07 12:36:32.649546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.687386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.687438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:09.442 [2024-10-07 12:36:32.687456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.852 ms 00:26:09.442 [2024-10-07 12:36:32.687482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.722837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.722877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:09.442 [2024-10-07 12:36:32.722910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.364 ms 00:26:09.442 [2024-10-07 12:36:32.722927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.722982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.722994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:09.442 [2024-10-07 12:36:32.723014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:09.442 [2024-10-07 12:36:32.723024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.723126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.442 [2024-10-07 12:36:32.723138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:09.442 [2024-10-07 12:36:32.723151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:09.442 [2024-10-07 12:36:32.723161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.442 [2024-10-07 12:36:32.724286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4225.460 ms, result 0 00:26:09.442 { 00:26:09.442 "name": "ftl0", 00:26:09.442 "uuid": "cfa8e74c-67c0-4e91-a7e6-4267d33e5f42" 00:26:09.442 } 00:26:09.701 12:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:09.701 12:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:09.701 12:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:09.701 12:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:09.701 12:36:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:09.960 /dev/nbd0 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:09.960 1+0 records in 00:26:09.960 1+0 records out 00:26:09.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331979 s, 12.3 MB/s 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:26:09.960 12:36:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:10.218 [2024-10-07 12:36:33.299653] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:10.218 [2024-10-07 12:36:33.299791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ] 00:26:10.218 [2024-10-07 12:36:33.468064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.477 [2024-10-07 12:36:33.686631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.854  [2024-10-07T12:36:36.080Z] Copying: 205/1024 [MB] (205 MBps) [2024-10-07T12:36:37.015Z] Copying: 412/1024 [MB] (206 MBps) [2024-10-07T12:36:38.391Z] Copying: 619/1024 [MB] (207 MBps) [2024-10-07T12:36:39.326Z] Copying: 821/1024 [MB] (201 MBps) [2024-10-07T12:36:39.326Z] Copying: 1013/1024 [MB] (192 MBps) [2024-10-07T12:36:40.714Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:26:17.423 00:26:17.423 12:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:18.800 12:36:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:19.058 [2024-10-07 12:36:42.096286] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
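# Aside: waitfornbd, traced above, polls until /dev/nbd0 is actually serving
# I/O before spdk_dd touches it. A condensed sketch of the helper (the real
# loop also retries the read itself; the 0.1 s delay is an assumption):
waitfornbd() {
	local nbd_name=$1 i size
	for ((i = 1; i <= 20; i++)); do
		grep -q -w "$nbd_name" /proc/partitions && break   # kernel registered the disk
		sleep 0.1
	done
	dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
	size=$(stat -c %s nbdtest)
	rm -f nbdtest
	[ "$size" != 0 ]   # one real 4 KiB block read back -> nbd is live
}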
00:26:19.058 [2024-10-07 12:36:42.096429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79163 ] 00:26:19.058 [2024-10-07 12:36:42.267344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.317 [2024-10-07 12:36:42.484503] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.693  [2024-10-07T12:36:44.923Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-07T12:36:45.858Z] Copying: 29/1024 [MB] (12 MBps) [2024-10-07T12:36:47.234Z] Copying: 46/1024 [MB] (16 MBps) [2024-10-07T12:36:47.800Z] Copying: 63/1024 [MB] (17 MBps) [2024-10-07T12:36:49.174Z] Copying: 80/1024 [MB] (16 MBps) [2024-10-07T12:36:50.109Z] Copying: 97/1024 [MB] (16 MBps) [2024-10-07T12:36:51.044Z] Copying: 114/1024 [MB] (16 MBps) [2024-10-07T12:36:51.979Z] Copying: 131/1024 [MB] (17 MBps) [2024-10-07T12:36:52.914Z] Copying: 148/1024 [MB] (17 MBps) [2024-10-07T12:36:53.847Z] Copying: 165/1024 [MB] (17 MBps) [2024-10-07T12:36:54.813Z] Copying: 182/1024 [MB] (16 MBps) [2024-10-07T12:36:56.194Z] Copying: 199/1024 [MB] (16 MBps) [2024-10-07T12:36:57.129Z] Copying: 216/1024 [MB] (17 MBps) [2024-10-07T12:36:58.065Z] Copying: 232/1024 [MB] (16 MBps) [2024-10-07T12:36:58.999Z] Copying: 249/1024 [MB] (16 MBps) [2024-10-07T12:36:59.934Z] Copying: 266/1024 [MB] (16 MBps) [2024-10-07T12:37:00.870Z] Copying: 283/1024 [MB] (16 MBps) [2024-10-07T12:37:01.805Z] Copying: 299/1024 [MB] (16 MBps) [2024-10-07T12:37:03.181Z] Copying: 316/1024 [MB] (16 MBps) [2024-10-07T12:37:04.117Z] Copying: 333/1024 [MB] (16 MBps) [2024-10-07T12:37:05.052Z] Copying: 350/1024 [MB] (17 MBps) [2024-10-07T12:37:05.986Z] Copying: 366/1024 [MB] (16 MBps) [2024-10-07T12:37:06.921Z] Copying: 383/1024 [MB] (16 MBps) [2024-10-07T12:37:07.857Z] Copying: 399/1024 [MB] (16 MBps) [2024-10-07T12:37:08.793Z] Copying: 416/1024 [MB] (16 MBps) [2024-10-07T12:37:10.169Z] Copying: 432/1024 [MB] (16 MBps) [2024-10-07T12:37:11.106Z] Copying: 448/1024 [MB] (15 MBps) [2024-10-07T12:37:12.043Z] Copying: 464/1024 [MB] (16 MBps) [2024-10-07T12:37:12.981Z] Copying: 481/1024 [MB] (16 MBps) [2024-10-07T12:37:13.967Z] Copying: 497/1024 [MB] (16 MBps) [2024-10-07T12:37:14.907Z] Copying: 514/1024 [MB] (16 MBps) [2024-10-07T12:37:15.845Z] Copying: 530/1024 [MB] (16 MBps) [2024-10-07T12:37:16.783Z] Copying: 547/1024 [MB] (16 MBps) [2024-10-07T12:37:18.161Z] Copying: 563/1024 [MB] (16 MBps) [2024-10-07T12:37:19.098Z] Copying: 579/1024 [MB] (16 MBps) [2024-10-07T12:37:20.035Z] Copying: 595/1024 [MB] (15 MBps) [2024-10-07T12:37:20.972Z] Copying: 611/1024 [MB] (16 MBps) [2024-10-07T12:37:21.910Z] Copying: 628/1024 [MB] (16 MBps) [2024-10-07T12:37:22.850Z] Copying: 645/1024 [MB] (17 MBps) [2024-10-07T12:37:23.789Z] Copying: 661/1024 [MB] (16 MBps) [2024-10-07T12:37:25.171Z] Copying: 678/1024 [MB] (16 MBps) [2024-10-07T12:37:25.740Z] Copying: 695/1024 [MB] (16 MBps) [2024-10-07T12:37:27.121Z] Copying: 711/1024 [MB] (16 MBps) [2024-10-07T12:37:28.057Z] Copying: 728/1024 [MB] (16 MBps) [2024-10-07T12:37:28.994Z] Copying: 745/1024 [MB] (16 MBps) [2024-10-07T12:37:29.931Z] Copying: 761/1024 [MB] (16 MBps) [2024-10-07T12:37:30.881Z] Copying: 777/1024 [MB] (16 MBps) [2024-10-07T12:37:31.819Z] Copying: 794/1024 [MB] (16 MBps) [2024-10-07T12:37:32.755Z] Copying: 810/1024 [MB] (16 MBps) [2024-10-07T12:37:34.136Z] Copying: 827/1024 [MB] (16 MBps) 
[2024-10-07T12:37:35.075Z] Copying: 843/1024 [MB] (16 MBps) [2024-10-07T12:37:36.015Z] Copying: 860/1024 [MB] (16 MBps) [2024-10-07T12:37:36.954Z] Copying: 877/1024 [MB] (16 MBps) [2024-10-07T12:37:37.894Z] Copying: 893/1024 [MB] (16 MBps) [2024-10-07T12:37:38.840Z] Copying: 910/1024 [MB] (16 MBps) [2024-10-07T12:37:39.778Z] Copying: 927/1024 [MB] (16 MBps) [2024-10-07T12:37:40.714Z] Copying: 944/1024 [MB] (17 MBps) [2024-10-07T12:37:42.092Z] Copying: 960/1024 [MB] (16 MBps) [2024-10-07T12:37:43.028Z] Copying: 977/1024 [MB] (16 MBps) [2024-10-07T12:37:43.964Z] Copying: 993/1024 [MB] (16 MBps) [2024-10-07T12:37:44.532Z] Copying: 1010/1024 [MB] (16 MBps) [2024-10-07T12:37:45.910Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:27:22.619 00:27:22.619 12:37:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:22.619 12:37:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:22.878 12:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:23.138 [2024-10-07 12:37:46.172945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.173256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:23.138 [2024-10-07 12:37:46.173298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:23.138 [2024-10-07 12:37:46.173313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.173351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:23.138 [2024-10-07 12:37:46.177665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.177697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:23.138 [2024-10-07 12:37:46.177713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:27:23.138 [2024-10-07 12:37:46.177723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.179858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.179910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:23.138 [2024-10-07 12:37:46.179927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:27:23.138 [2024-10-07 12:37:46.179938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.197426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.197574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:23.138 [2024-10-07 12:37:46.197602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.489 ms 00:27:23.138 [2024-10-07 12:37:46.197629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.202496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.202527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:23.138 [2024-10-07 12:37:46.202545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:27:23.138 [2024-10-07 12:37:46.202555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 
12:37:46.238404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.238441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:23.138 [2024-10-07 12:37:46.238458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.827 ms 00:27:23.138 [2024-10-07 12:37:46.238484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.260195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.260362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:23.138 [2024-10-07 12:37:46.260405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.699 ms 00:27:23.138 [2024-10-07 12:37:46.260416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.260596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.260611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:23.138 [2024-10-07 12:37:46.260629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:23.138 [2024-10-07 12:37:46.260639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.296054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.296090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:23.138 [2024-10-07 12:37:46.296106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.451 ms 00:27:23.138 [2024-10-07 12:37:46.296131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.330753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.330790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:23.138 [2024-10-07 12:37:46.330805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.636 ms 00:27:23.138 [2024-10-07 12:37:46.330815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.138 [2024-10-07 12:37:46.365553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.138 [2024-10-07 12:37:46.365708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:23.139 [2024-10-07 12:37:46.365733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.749 ms 00:27:23.139 [2024-10-07 12:37:46.365759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.139 [2024-10-07 12:37:46.399741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.139 [2024-10-07 12:37:46.399778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:23.139 [2024-10-07 12:37:46.399795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.918 ms 00:27:23.139 [2024-10-07 12:37:46.399821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.139 [2024-10-07 12:37:46.399863] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:23.139 [2024-10-07 12:37:46.399880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.399896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:27:23.139 [2024-10-07 12:37:46.399929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.399943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.399954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.399974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.399985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:27:23.139 [2024-10-07 12:37:46.400254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400870] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:23.139 [2024-10-07 12:37:46.400987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.400998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:23.140 [2024-10-07 12:37:46.401177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:23.140 [2024-10-07 12:37:46.401193] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: cfa8e74c-67c0-4e91-a7e6-4267d33e5f42 00:27:23.140 [2024-10-07 12:37:46.401204] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:23.140 [2024-10-07 12:37:46.401219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:23.140 [2024-10-07 12:37:46.401228] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:23.140 [2024-10-07 12:37:46.401241] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:23.140 [2024-10-07 12:37:46.401251] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:23.140 [2024-10-07 12:37:46.401264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:23.140 [2024-10-07 12:37:46.401274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:23.140 [2024-10-07 12:37:46.401285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:23.140 [2024-10-07 12:37:46.401294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:23.140 [2024-10-07 12:37:46.401319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.140 [2024-10-07 12:37:46.401330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:23.140 [2024-10-07 12:37:46.401343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:27:23.140 [2024-10-07 12:37:46.401354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.140 [2024-10-07 12:37:46.421124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.140 [2024-10-07 12:37:46.421159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:23.140 [2024-10-07 12:37:46.421174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.742 ms 00:27:23.140 [2024-10-07 12:37:46.421200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.140 [2024-10-07 12:37:46.421684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.140 [2024-10-07 12:37:46.421699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:23.140 [2024-10-07 12:37:46.421713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:27:23.140 [2024-10-07 12:37:46.421726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.399 [2024-10-07 12:37:46.478510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.399 [2024-10-07 12:37:46.478709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:23.399 [2024-10-07 12:37:46.478735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.399 [2024-10-07 12:37:46.478746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.399 [2024-10-07 12:37:46.478816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.399 [2024-10-07 12:37:46.478828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:23.399 [2024-10-07 12:37:46.478841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.399 [2024-10-07 12:37:46.478854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.399 [2024-10-07 12:37:46.478976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.399 [2024-10-07 12:37:46.478991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:27:23.399 [2024-10-07 12:37:46.479005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.399 [2024-10-07 12:37:46.479015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.399 [2024-10-07 12:37:46.479041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.399 [2024-10-07 12:37:46.479052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:23.399 [2024-10-07 12:37:46.479065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.399 [2024-10-07 12:37:46.479075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.399 [2024-10-07 12:37:46.602858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.399 [2024-10-07 12:37:46.603074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:23.399 [2024-10-07 12:37:46.603104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.399 [2024-10-07 12:37:46.603115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.703653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.703711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:23.658 [2024-10-07 12:37:46.703729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.703758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.703876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.703889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.658 [2024-10-07 12:37:46.703903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.703913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.704011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.704025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.658 [2024-10-07 12:37:46.704038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.704049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.704177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.704191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.658 [2024-10-07 12:37:46.704204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.704214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.704254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.704267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:23.658 [2024-10-07 12:37:46.704280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.704289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.704333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.704344] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:23.658 [2024-10-07 12:37:46.704357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.704367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.704418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.658 [2024-10-07 12:37:46.704430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:23.658 [2024-10-07 12:37:46.704443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.658 [2024-10-07 12:37:46.704452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.658 [2024-10-07 12:37:46.704584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.469 ms, result 0 00:27:23.658 true 00:27:23.658 12:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78923 00:27:23.658 12:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78923 00:27:23.659 12:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:23.659 [2024-10-07 12:37:46.819704] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:27:23.659 [2024-10-07 12:37:46.819829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79825 ] 00:27:23.918 [2024-10-07 12:37:46.991639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.918 [2024-10-07 12:37:47.203037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.293  [2024-10-07T12:37:49.553Z] Copying: 211/1024 [MB] (211 MBps) [2024-10-07T12:37:50.925Z] Copying: 424/1024 [MB] (212 MBps) [2024-10-07T12:37:51.859Z] Copying: 638/1024 [MB] (214 MBps) [2024-10-07T12:37:52.425Z] Copying: 849/1024 [MB] (210 MBps) [2024-10-07T12:37:53.801Z] Copying: 1024/1024 [MB] (average 212 MBps) 00:27:30.510 00:27:30.510 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78923 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:30.510 12:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:30.510 [2024-10-07 12:37:53.637303] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
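
With the original target (pid 78923) killed, the test keeps driving I/O into the FTL device from spdk_dd alone by replaying the bdev configuration captured earlier with save_subsystem_config. A sketch of that capture-and-replay pattern, using the binaries and file names from this run (the redirection into ftl.json is an assumption; the trace only shows the echo/rpc.py commands themselves):

    # capture: wrap the bdev subsystem config in a top-level "subsystems" array
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # replay: spdk_dd loads the JSON itself, so no running target is needed
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
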
00:27:30.510 [2024-10-07 12:37:53.637428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79895 ] 00:27:30.769 [2024-10-07 12:37:53.808168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.769 [2024-10-07 12:37:54.006430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.335 [2024-10-07 12:37:54.349972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.336 [2024-10-07 12:37:54.350291] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.336 [2024-10-07 12:37:54.415965] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:31.336 [2024-10-07 12:37:54.416417] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:31.336 [2024-10-07 12:37:54.416639] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:31.595 [2024-10-07 12:37:54.733970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.595 [2024-10-07 12:37:54.734224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:31.595 [2024-10-07 12:37:54.734263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:31.595 [2024-10-07 12:37:54.734275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.595 [2024-10-07 12:37:54.734333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.595 [2024-10-07 12:37:54.734345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:31.595 [2024-10-07 12:37:54.734356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:31.595 [2024-10-07 12:37:54.734370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.595 [2024-10-07 12:37:54.734391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:31.595 [2024-10-07 12:37:54.735431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:31.595 [2024-10-07 12:37:54.735462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.595 [2024-10-07 12:37:54.735473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:31.595 [2024-10-07 12:37:54.735485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:27:31.595 [2024-10-07 12:37:54.735495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.595 [2024-10-07 12:37:54.736935] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:31.595 [2024-10-07 12:37:54.755437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.595 [2024-10-07 12:37:54.755475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:31.595 [2024-10-07 12:37:54.755490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.533 ms 00:27:31.595 [2024-10-07 12:37:54.755516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.595 [2024-10-07 12:37:54.755573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.595 [2024-10-07 12:37:54.755589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:31.595 [2024-10-07 12:37:54.755600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:31.595 [2024-10-07 12:37:54.755610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.595 [2024-10-07 12:37:54.762348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.762376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:31.596 [2024-10-07 12:37:54.762388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.678 ms 00:27:31.596 [2024-10-07 12:37:54.762397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.762490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.762504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:31.596 [2024-10-07 12:37:54.762514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:31.596 [2024-10-07 12:37:54.762524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.762565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.762577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:31.596 [2024-10-07 12:37:54.762588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:31.596 [2024-10-07 12:37:54.762598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.762622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:31.596 [2024-10-07 12:37:54.767289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.767322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:31.596 [2024-10-07 12:37:54.767335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:27:31.596 [2024-10-07 12:37:54.767345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.767377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.767387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:31.596 [2024-10-07 12:37:54.767397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:31.596 [2024-10-07 12:37:54.767407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.767459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:31.596 [2024-10-07 12:37:54.767480] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:31.596 [2024-10-07 12:37:54.767514] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:31.596 [2024-10-07 12:37:54.767533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:31.596 [2024-10-07 12:37:54.767642] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:31.596 [2024-10-07 12:37:54.767661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:31.596 
[2024-10-07 12:37:54.767673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:31.596 [2024-10-07 12:37:54.767686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:31.596 [2024-10-07 12:37:54.767698] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:31.596 [2024-10-07 12:37:54.767709] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:31.596 [2024-10-07 12:37:54.767719] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:31.596 [2024-10-07 12:37:54.767729] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:31.596 [2024-10-07 12:37:54.767738] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:31.596 [2024-10-07 12:37:54.767749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.767764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:31.596 [2024-10-07 12:37:54.767774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:27:31.596 [2024-10-07 12:37:54.767784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.767857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.596 [2024-10-07 12:37:54.767867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:31.596 [2024-10-07 12:37:54.767878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:31.596 [2024-10-07 12:37:54.767888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.596 [2024-10-07 12:37:54.768032] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:31.596 [2024-10-07 12:37:54.768048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:31.596 [2024-10-07 12:37:54.768062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:31.596 [2024-10-07 12:37:54.768092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:31.596 [2024-10-07 12:37:54.768122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.596 [2024-10-07 12:37:54.768166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:31.596 [2024-10-07 12:37:54.768175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:31.596 [2024-10-07 12:37:54.768184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.596 [2024-10-07 12:37:54.768193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:31.596 [2024-10-07 12:37:54.768202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:31.596 [2024-10-07 12:37:54.768211] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:31.596 [2024-10-07 12:37:54.768230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:31.596 [2024-10-07 12:37:54.768257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:31.596 [2024-10-07 12:37:54.768283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:31.596 [2024-10-07 12:37:54.768309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:31.596 [2024-10-07 12:37:54.768335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:31.596 [2024-10-07 12:37:54.768377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.596 [2024-10-07 12:37:54.768395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:31.596 [2024-10-07 12:37:54.768404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:31.596 [2024-10-07 12:37:54.768413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.596 [2024-10-07 12:37:54.768422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:31.596 [2024-10-07 12:37:54.768430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:31.596 [2024-10-07 12:37:54.768439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:31.596 [2024-10-07 12:37:54.768457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:31.596 [2024-10-07 12:37:54.768468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.596 [2024-10-07 12:37:54.768477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:31.596 [2024-10-07 12:37:54.768487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:31.596 [2024-10-07 12:37:54.768496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.596 [2024-10-07 
12:37:54.768516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:31.596 [2024-10-07 12:37:54.768525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:31.596 [2024-10-07 12:37:54.768534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:31.596 [2024-10-07 12:37:54.768544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:31.596 [2024-10-07 12:37:54.768552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:31.596 [2024-10-07 12:37:54.768562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:31.596 [2024-10-07 12:37:54.768572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:31.596 [2024-10-07 12:37:54.768584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.596 [2024-10-07 12:37:54.768596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:31.596 [2024-10-07 12:37:54.768606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:31.596 [2024-10-07 12:37:54.768616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:31.596 [2024-10-07 12:37:54.768626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:31.596 [2024-10-07 12:37:54.768636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:31.596 [2024-10-07 12:37:54.768646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:31.596 [2024-10-07 12:37:54.768656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:31.596 [2024-10-07 12:37:54.768666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:31.596 [2024-10-07 12:37:54.768676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:31.596 [2024-10-07 12:37:54.768686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:31.596 [2024-10-07 12:37:54.768696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:31.596 [2024-10-07 12:37:54.768706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:31.597 [2024-10-07 12:37:54.768717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:31.597 [2024-10-07 12:37:54.768727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:31.597 [2024-10-07 12:37:54.768737] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:31.597 [2024-10-07 12:37:54.768748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.597 [2024-10-07 12:37:54.768762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:31.597 [2024-10-07 12:37:54.768773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:31.597 [2024-10-07 12:37:54.768783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:31.597 [2024-10-07 12:37:54.768795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:31.597 [2024-10-07 12:37:54.768806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.768816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:31.597 [2024-10-07 12:37:54.768826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:27:31.597 [2024-10-07 12:37:54.768836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.597 [2024-10-07 12:37:54.819929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.819980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.597 [2024-10-07 12:37:54.819997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.128 ms 00:27:31.597 [2024-10-07 12:37:54.820008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.597 [2024-10-07 12:37:54.820103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.820114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:31.597 [2024-10-07 12:37:54.820125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:31.597 [2024-10-07 12:37:54.820135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.597 [2024-10-07 12:37:54.866719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.866761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.597 [2024-10-07 12:37:54.866777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.573 ms 00:27:31.597 [2024-10-07 12:37:54.866788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.597 [2024-10-07 12:37:54.866845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.866856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.597 [2024-10-07 12:37:54.866867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:31.597 [2024-10-07 12:37:54.866877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.597 [2024-10-07 12:37:54.867403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.867418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.597 [2024-10-07 12:37:54.867429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:27:31.597 [2024-10-07 12:37:54.867439] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.597 [2024-10-07 12:37:54.867572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.597 [2024-10-07 12:37:54.867591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.597 [2024-10-07 12:37:54.867602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:31.597 [2024-10-07 12:37:54.867611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.856 [2024-10-07 12:37:54.886819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.856 [2024-10-07 12:37:54.886857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.856 [2024-10-07 12:37:54.886872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.217 ms 00:27:31.856 [2024-10-07 12:37:54.886884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.856 [2024-10-07 12:37:54.906291] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:31.856 [2024-10-07 12:37:54.906337] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:31.856 [2024-10-07 12:37:54.906353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.856 [2024-10-07 12:37:54.906364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:31.856 [2024-10-07 12:37:54.906376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.346 ms 00:27:31.856 [2024-10-07 12:37:54.906387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.856 [2024-10-07 12:37:54.936502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.856 [2024-10-07 12:37:54.936676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:31.856 [2024-10-07 12:37:54.936708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.113 ms 00:27:31.856 [2024-10-07 12:37:54.936720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.856 [2024-10-07 12:37:54.955472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.856 [2024-10-07 12:37:54.955518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:31.856 [2024-10-07 12:37:54.955532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.680 ms 00:27:31.856 [2024-10-07 12:37:54.955543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.856 [2024-10-07 12:37:54.973583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.856 [2024-10-07 12:37:54.973760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:31.856 [2024-10-07 12:37:54.973783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.985 ms 00:27:31.856 [2024-10-07 12:37:54.973795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:54.974651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:54.974676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:31.857 [2024-10-07 12:37:54.974689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:27:31.857 [2024-10-07 12:37:54.974700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
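
The layout dump a little further up is internally consistent and can be checked by hand: 20971520 L2P entries at the reported address size of 4 bytes is exactly the 80.00 MiB shown for the l2p region, and with one entry per 4096-byte logical block the map covers 80 GiB of logical space on the 103424.00 MiB base device:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P table size  -> 80 (MiB)
    echo $(( 20971520 * 4096 / 1024**3 ))     # mapped capacity -> 80 (GiB)
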
00:27:31.857 [2024-10-07 12:37:55.063131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.063196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:31.857 [2024-10-07 12:37:55.063214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.540 ms 00:27:31.857 [2024-10-07 12:37:55.063225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.075735] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:31.857 [2024-10-07 12:37:55.078991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.079020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:31.857 [2024-10-07 12:37:55.079034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.722 ms 00:27:31.857 [2024-10-07 12:37:55.079044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.079156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.079170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:31.857 [2024-10-07 12:37:55.079181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:31.857 [2024-10-07 12:37:55.079191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.079316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.079336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:31.857 [2024-10-07 12:37:55.079348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:31.857 [2024-10-07 12:37:55.079358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.079389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.079400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:31.857 [2024-10-07 12:37:55.079410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:31.857 [2024-10-07 12:37:55.079420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.079477] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:31.857 [2024-10-07 12:37:55.079489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.079504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:31.857 [2024-10-07 12:37:55.079515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:31.857 [2024-10-07 12:37:55.079526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.116582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 12:37:55.116640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:31.857 [2024-10-07 12:37:55.116659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.093 ms 00:27:31.857 [2024-10-07 12:37:55.116670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.116760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.857 [2024-10-07 
12:37:55.116773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:31.857 [2024-10-07 12:37:55.116785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:31.857 [2024-10-07 12:37:55.116795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.857 [2024-10-07 12:37:55.117878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.083 ms, result 0 00:27:33.234  [2024-10-07T12:37:57.462Z] Copying: 24/1024 [MB] (24 MBps) [2024-10-07T12:37:58.398Z] Copying: 48/1024 [MB] (23 MBps) [2024-10-07T12:37:59.333Z] Copying: 72/1024 [MB] (23 MBps) [2024-10-07T12:38:00.271Z] Copying: 96/1024 [MB] (24 MBps) [2024-10-07T12:38:01.218Z] Copying: 120/1024 [MB] (23 MBps) [2024-10-07T12:38:02.155Z] Copying: 143/1024 [MB] (23 MBps) [2024-10-07T12:38:03.534Z] Copying: 168/1024 [MB] (24 MBps) [2024-10-07T12:38:04.479Z] Copying: 192/1024 [MB] (23 MBps) [2024-10-07T12:38:05.418Z] Copying: 215/1024 [MB] (23 MBps) [2024-10-07T12:38:06.357Z] Copying: 239/1024 [MB] (23 MBps) [2024-10-07T12:38:07.340Z] Copying: 264/1024 [MB] (24 MBps) [2024-10-07T12:38:08.277Z] Copying: 288/1024 [MB] (24 MBps) [2024-10-07T12:38:09.215Z] Copying: 312/1024 [MB] (24 MBps) [2024-10-07T12:38:10.153Z] Copying: 337/1024 [MB] (24 MBps) [2024-10-07T12:38:11.532Z] Copying: 361/1024 [MB] (24 MBps) [2024-10-07T12:38:12.471Z] Copying: 386/1024 [MB] (24 MBps) [2024-10-07T12:38:13.409Z] Copying: 410/1024 [MB] (24 MBps) [2024-10-07T12:38:14.346Z] Copying: 432/1024 [MB] (22 MBps) [2024-10-07T12:38:15.284Z] Copying: 456/1024 [MB] (24 MBps) [2024-10-07T12:38:16.222Z] Copying: 480/1024 [MB] (23 MBps) [2024-10-07T12:38:17.160Z] Copying: 505/1024 [MB] (24 MBps) [2024-10-07T12:38:18.100Z] Copying: 528/1024 [MB] (23 MBps) [2024-10-07T12:38:19.478Z] Copying: 552/1024 [MB] (23 MBps) [2024-10-07T12:38:20.415Z] Copying: 575/1024 [MB] (23 MBps) [2024-10-07T12:38:21.352Z] Copying: 599/1024 [MB] (23 MBps) [2024-10-07T12:38:22.289Z] Copying: 622/1024 [MB] (23 MBps) [2024-10-07T12:38:23.262Z] Copying: 646/1024 [MB] (23 MBps) [2024-10-07T12:38:24.225Z] Copying: 670/1024 [MB] (24 MBps) [2024-10-07T12:38:25.163Z] Copying: 695/1024 [MB] (24 MBps) [2024-10-07T12:38:26.111Z] Copying: 719/1024 [MB] (24 MBps) [2024-10-07T12:38:27.497Z] Copying: 744/1024 [MB] (24 MBps) [2024-10-07T12:38:28.435Z] Copying: 769/1024 [MB] (24 MBps) [2024-10-07T12:38:29.372Z] Copying: 792/1024 [MB] (23 MBps) [2024-10-07T12:38:30.308Z] Copying: 817/1024 [MB] (25 MBps) [2024-10-07T12:38:31.243Z] Copying: 842/1024 [MB] (24 MBps) [2024-10-07T12:38:32.180Z] Copying: 866/1024 [MB] (24 MBps) [2024-10-07T12:38:33.117Z] Copying: 890/1024 [MB] (24 MBps) [2024-10-07T12:38:34.497Z] Copying: 915/1024 [MB] (24 MBps) [2024-10-07T12:38:35.434Z] Copying: 939/1024 [MB] (24 MBps) [2024-10-07T12:38:36.371Z] Copying: 963/1024 [MB] (24 MBps) [2024-10-07T12:38:37.310Z] Copying: 987/1024 [MB] (23 MBps) [2024-10-07T12:38:37.568Z] Copying: 1012/1024 [MB] (24 MBps) [2024-10-07T12:38:37.568Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-10-07 12:38:37.538327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.277 [2024-10-07 12:38:37.538381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:14.277 [2024-10-07 12:38:37.538398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:14.277 [2024-10-07 12:38:37.538408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.277 
[2024-10-07 12:38:37.538428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:14.277 [2024-10-07 12:38:37.542568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.277 [2024-10-07 12:38:37.542601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:14.277 [2024-10-07 12:38:37.542613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.130 ms 00:28:14.277 [2024-10-07 12:38:37.542622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.277 [2024-10-07 12:38:37.544524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.277 [2024-10-07 12:38:37.544685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:14.277 [2024-10-07 12:38:37.544708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:28:14.277 [2024-10-07 12:38:37.544718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.277 [2024-10-07 12:38:37.562117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.277 [2024-10-07 12:38:37.562155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:14.277 [2024-10-07 12:38:37.562175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.404 ms 00:28:14.277 [2024-10-07 12:38:37.562185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.277 [2024-10-07 12:38:37.567027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.277 [2024-10-07 12:38:37.567187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:14.277 [2024-10-07 12:38:37.567206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:28:14.277 [2024-10-07 12:38:37.567216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.602543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.602579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:14.536 [2024-10-07 12:38:37.602591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.326 ms 00:28:14.536 [2024-10-07 12:38:37.602617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.623823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.623861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:14.536 [2024-10-07 12:38:37.623874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.199 ms 00:28:14.536 [2024-10-07 12:38:37.623891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.625419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.625562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:14.536 [2024-10-07 12:38:37.625581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:28:14.536 [2024-10-07 12:38:37.625592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.662139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.662176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:14.536 [2024-10-07 
12:38:37.662188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.585 ms 00:28:14.536 [2024-10-07 12:38:37.662214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.696769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.696824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:14.536 [2024-10-07 12:38:37.696837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.575 ms 00:28:14.536 [2024-10-07 12:38:37.696862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.731872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.731926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:14.536 [2024-10-07 12:38:37.731939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.032 ms 00:28:14.536 [2024-10-07 12:38:37.731949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.766555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.536 [2024-10-07 12:38:37.766712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:14.536 [2024-10-07 12:38:37.766732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.557 ms 00:28:14.536 [2024-10-07 12:38:37.766743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.536 [2024-10-07 12:38:37.766853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:14.536 [2024-10-07 12:38:37.766874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:28:14.536 [2024-10-07 12:38:37.766887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.766994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 
12:38:37.767046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:28:14.536 [2024-10-07 12:38:37.767305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:14.536 [2024-10-07 12:38:37.767872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:14.537 [2024-10-07 12:38:37.767961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:14.537 [2024-10-07 12:38:37.767971] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cfa8e74c-67c0-4e91-a7e6-4267d33e5f42 00:28:14.537 [2024-10-07 12:38:37.767982] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:28:14.537 [2024-10-07 12:38:37.767991] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:28:14.537 [2024-10-07 12:38:37.768001] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:28:14.537 [2024-10-07 12:38:37.768011] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:28:14.537 [2024-10-07 12:38:37.768020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:14.537 [2024-10-07 12:38:37.768030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:14.537 [2024-10-07 12:38:37.768050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:14.537 [2024-10-07 12:38:37.768060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:14.537 [2024-10-07 12:38:37.768068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:14.537 [2024-10-07 12:38:37.768077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.537 [2024-10-07 12:38:37.768087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:14.537 [2024-10-07 12:38:37.768101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:28:14.537 [2024-10-07 12:38:37.768111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.537 [2024-10-07 12:38:37.787709] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:28:14.537 [2024-10-07 12:38:37.787741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:14.537 [2024-10-07 12:38:37.787753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.596 ms 00:28:14.537 [2024-10-07 12:38:37.787779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.537 [2024-10-07 12:38:37.788461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.537 [2024-10-07 12:38:37.788559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:14.537 [2024-10-07 12:38:37.788646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:28:14.537 [2024-10-07 12:38:37.788661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.796 [2024-10-07 12:38:37.832484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.796 [2024-10-07 12:38:37.832520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.796 [2024-10-07 12:38:37.832532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.796 [2024-10-07 12:38:37.832558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.796 [2024-10-07 12:38:37.832612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.796 [2024-10-07 12:38:37.832622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.796 [2024-10-07 12:38:37.832632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.796 [2024-10-07 12:38:37.832642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.796 [2024-10-07 12:38:37.832715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.796 [2024-10-07 12:38:37.832728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.797 [2024-10-07 12:38:37.832739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:37.832748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:37.832765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:37.832779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.797 [2024-10-07 12:38:37.832789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:37.832799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:37.952919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:37.952974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.797 [2024-10-07 12:38:37.952990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:37.953000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.048931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.048999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.797 [2024-10-07 12:38:38.049014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
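The statistics dumped a little above make the write-amplification figure easy to verify: WAF here is total writes divided by user writes, 1984 / 1024 = 1.9375, exactly the value printed. In other words, for each of the 1024 user-written blocks the device wrote almost one additional block on its own behalf, consistent with the persist steps (L2P, NV cache, valid map, band, trim and superblock metadata) recorded during the shutdown sequence. The 1024 valid LBAs also line up with the band dump: Band 1 is the only non-free band, showing 1024 / 261120 with wr_cnt 1.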
00:28:14.797 [2024-10-07 12:38:38.049110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.049123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.797 [2024-10-07 12:38:38.049133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.049180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.049192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.797 [2024-10-07 12:38:38.049208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.049348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.049362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.797 [2024-10-07 12:38:38.049373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.049416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.049429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:14.797 [2024-10-07 12:38:38.049439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.049489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.049501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.797 [2024-10-07 12:38:38.049511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.049560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.797 [2024-10-07 12:38:38.049572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.797 [2024-10-07 12:38:38.049586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.797 [2024-10-07 12:38:38.049595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.797 [2024-10-07 12:38:38.049719] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 512.181 ms, result 0 00:28:16.174 00:28:16.175 00:28:16.175 12:38:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:18.114 12:38:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:18.114 [2024-10-07 12:38:41.086845] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
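The spdk_dd invocation above copies data out of the ftl0 bdev into a regular file (--ib=ftl0 --of=.../testfile, per the flags shown), evidently so the dirty_shutdown test can md5sum and compare it. Assuming the FTL's usual 4 KiB block size, --count=262144 works out to 262144 × 4 KiB = 1 GiB, which matches both the 1048576 [kB] and 1024 [MB] totals in the progress trail that follows.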
00:28:18.114 [2024-10-07 12:38:41.086989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80378 ] 00:28:18.114 [2024-10-07 12:38:41.259621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.382 [2024-10-07 12:38:41.455041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.642 [2024-10-07 12:38:41.800217] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.642 [2024-10-07 12:38:41.800280] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.903 [2024-10-07 12:38:41.961404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.961452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:18.903 [2024-10-07 12:38:41.961467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:18.903 [2024-10-07 12:38:41.961481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.961529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.961541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:18.903 [2024-10-07 12:38:41.961552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:18.903 [2024-10-07 12:38:41.961562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.961582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:18.903 [2024-10-07 12:38:41.962480] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:18.903 [2024-10-07 12:38:41.962507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.962518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:18.903 [2024-10-07 12:38:41.962528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:28:18.903 [2024-10-07 12:38:41.962538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.964030] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:18.903 [2024-10-07 12:38:41.982563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.982600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:18.903 [2024-10-07 12:38:41.982613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.563 ms 00:28:18.903 [2024-10-07 12:38:41.982623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.982679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.982691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:18.903 [2024-10-07 12:38:41.982701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:18.903 [2024-10-07 12:38:41.982710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.989678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:18.903 [2024-10-07 12:38:41.989708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:18.903 [2024-10-07 12:38:41.989719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.910 ms 00:28:18.903 [2024-10-07 12:38:41.989728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.989805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.989818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:18.903 [2024-10-07 12:38:41.989828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:18.903 [2024-10-07 12:38:41.989837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.989879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.989890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:18.903 [2024-10-07 12:38:41.989915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:18.903 [2024-10-07 12:38:41.989925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.989948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:18.903 [2024-10-07 12:38:41.994679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.994708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:18.903 [2024-10-07 12:38:41.994719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:28:18.903 [2024-10-07 12:38:41.994728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.994757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.994767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:18.903 [2024-10-07 12:38:41.994776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:18.903 [2024-10-07 12:38:41.994785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.994837] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:18.903 [2024-10-07 12:38:41.994858] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:18.903 [2024-10-07 12:38:41.994891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:18.903 [2024-10-07 12:38:41.994921] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:18.903 [2024-10-07 12:38:41.995031] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:18.903 [2024-10-07 12:38:41.995044] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:18.903 [2024-10-07 12:38:41.995057] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:18.903 [2024-10-07 12:38:41.995090] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:18.903 [2024-10-07 12:38:41.995102] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:18.903 [2024-10-07 12:38:41.995114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:18.903 [2024-10-07 12:38:41.995124] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:18.903 [2024-10-07 12:38:41.995134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:18.903 [2024-10-07 12:38:41.995143] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:18.903 [2024-10-07 12:38:41.995154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.995164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:18.903 [2024-10-07 12:38:41.995174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:18.903 [2024-10-07 12:38:41.995184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.995255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.903 [2024-10-07 12:38:41.995269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:18.903 [2024-10-07 12:38:41.995279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:18.903 [2024-10-07 12:38:41.995289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.903 [2024-10-07 12:38:41.995382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:18.903 [2024-10-07 12:38:41.995397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:18.903 [2024-10-07 12:38:41.995407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:18.903 [2024-10-07 12:38:41.995417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.903 [2024-10-07 12:38:41.995427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:18.903 [2024-10-07 12:38:41.995436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:18.904 [2024-10-07 12:38:41.995466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:18.904 [2024-10-07 12:38:41.995485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:18.904 [2024-10-07 12:38:41.995494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:18.904 [2024-10-07 12:38:41.995503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:18.904 [2024-10-07 12:38:41.995522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:18.904 [2024-10-07 12:38:41.995531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:18.904 [2024-10-07 12:38:41.995541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:18.904 [2024-10-07 12:38:41.995559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995568] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:18.904 [2024-10-07 12:38:41.995586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:18.904 [2024-10-07 12:38:41.995614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:18.904 [2024-10-07 12:38:41.995640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:18.904 [2024-10-07 12:38:41.995666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:18.904 [2024-10-07 12:38:41.995693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:18.904 [2024-10-07 12:38:41.995710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:18.904 [2024-10-07 12:38:41.995719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:18.904 [2024-10-07 12:38:41.995728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:18.904 [2024-10-07 12:38:41.995737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:18.904 [2024-10-07 12:38:41.995746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:18.904 [2024-10-07 12:38:41.995754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:18.904 [2024-10-07 12:38:41.995776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:18.904 [2024-10-07 12:38:41.995785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:18.904 [2024-10-07 12:38:41.995803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:18.904 [2024-10-07 12:38:41.995816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.904 [2024-10-07 12:38:41.995836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:18.904 [2024-10-07 12:38:41.995845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:18.904 [2024-10-07 12:38:41.995855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:18.904 
[2024-10-07 12:38:41.995864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:18.904 [2024-10-07 12:38:41.995873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:18.904 [2024-10-07 12:38:41.995882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:18.904 [2024-10-07 12:38:41.995893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:18.904 [2024-10-07 12:38:41.995905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.995927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:18.904 [2024-10-07 12:38:41.995938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:18.904 [2024-10-07 12:38:41.995948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:18.904 [2024-10-07 12:38:41.995959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:18.904 [2024-10-07 12:38:41.995969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:18.904 [2024-10-07 12:38:41.995979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:18.904 [2024-10-07 12:38:41.995989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:18.904 [2024-10-07 12:38:41.995999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:18.904 [2024-10-07 12:38:41.996009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:18.904 [2024-10-07 12:38:41.996019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.996029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.996039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.996049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.996059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:18.904 [2024-10-07 12:38:41.996069] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:18.904 [2024-10-07 12:38:41.996080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.996091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:18.904 [2024-10-07 12:38:41.996103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:18.904 [2024-10-07 12:38:41.996113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:18.904 [2024-10-07 12:38:41.996123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:18.904 [2024-10-07 12:38:41.996134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:41.996144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:18.904 [2024-10-07 12:38:41.996154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:28:18.904 [2024-10-07 12:38:41.996164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.044225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.044257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:18.904 [2024-10-07 12:38:42.044270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.090 ms 00:28:18.904 [2024-10-07 12:38:42.044281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.044355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.044366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:18.904 [2024-10-07 12:38:42.044375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:18.904 [2024-10-07 12:38:42.044384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.091624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.091664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:18.904 [2024-10-07 12:38:42.091683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.265 ms 00:28:18.904 [2024-10-07 12:38:42.091695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.091730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.091741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:18.904 [2024-10-07 12:38:42.091753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:18.904 [2024-10-07 12:38:42.091764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.092279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.092295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:18.904 [2024-10-07 12:38:42.092306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:28:18.904 [2024-10-07 12:38:42.092323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.092467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.092485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:18.904 [2024-10-07 12:38:42.092496] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:18.904 [2024-10-07 12:38:42.092506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.110148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.110180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:18.904 [2024-10-07 12:38:42.110193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.650 ms 00:28:18.904 [2024-10-07 12:38:42.110202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.904 [2024-10-07 12:38:42.128532] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:28:18.904 [2024-10-07 12:38:42.128694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:18.904 [2024-10-07 12:38:42.128713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.904 [2024-10-07 12:38:42.128724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:18.904 [2024-10-07 12:38:42.128735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:28:18.904 [2024-10-07 12:38:42.128744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.905 [2024-10-07 12:38:42.156289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.905 [2024-10-07 12:38:42.156326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:18.905 [2024-10-07 12:38:42.156339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.553 ms 00:28:18.905 [2024-10-07 12:38:42.156349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.905 [2024-10-07 12:38:42.173258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.905 [2024-10-07 12:38:42.173290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:18.905 [2024-10-07 12:38:42.173302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.897 ms 00:28:18.905 [2024-10-07 12:38:42.173311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.905 [2024-10-07 12:38:42.190398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.905 [2024-10-07 12:38:42.190529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:18.905 [2024-10-07 12:38:42.190564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.078 ms 00:28:18.905 [2024-10-07 12:38:42.190574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.905 [2024-10-07 12:38:42.191332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.905 [2024-10-07 12:38:42.191353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:18.905 [2024-10-07 12:38:42.191364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:28:18.905 [2024-10-07 12:38:42.191374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.274096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.274153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:19.164 [2024-10-07 12:38:42.274168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.834 ms 00:28:19.164 [2024-10-07 12:38:42.274178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.284651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:19.164 [2024-10-07 12:38:42.287390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.287421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.164 [2024-10-07 12:38:42.287435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.190 ms 00:28:19.164 [2024-10-07 12:38:42.287450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.287534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.287546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:19.164 [2024-10-07 12:38:42.287556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:19.164 [2024-10-07 12:38:42.287566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.288548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.288641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.164 [2024-10-07 12:38:42.288743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:28:19.164 [2024-10-07 12:38:42.288780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.288834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.288920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.164 [2024-10-07 12:38:42.288958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:19.164 [2024-10-07 12:38:42.288987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.289085] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:19.164 [2024-10-07 12:38:42.289103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.289113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:19.164 [2024-10-07 12:38:42.289128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:19.164 [2024-10-07 12:38:42.289138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.324223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.324257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.164 [2024-10-07 12:38:42.324270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.119 ms 00:28:19.164 [2024-10-07 12:38:42.324279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.164 [2024-10-07 12:38:42.324348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.164 [2024-10-07 12:38:42.324359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:19.164 [2024-10-07 12:38:42.324369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:19.164 [2024-10-07 12:38:42.324379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
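
Each management step above is reported by mngt/ftl_mngt.c:trace_step as a fixed four-record group: Action, the step name, its duration in milliseconds, and a status code. A minimal standalone sketch of that reporting pattern follows; the helper below is illustrative only, not SPDK's actual implementation or log macros.

#include <stdio.h>
#include <time.h>

/* Illustrative stand-in for the Action/name/duration/status group that
 * trace_step emits for every FTL management step in the log above. */
static void trace_step(const char *dev, const char *name,
                       struct timespec start, int status)
{
        struct timespec end;
        clock_gettime(CLOCK_MONOTONIC, &end);
        double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                    (end.tv_nsec - start.tv_nsec) / 1e6;
        printf("[FTL][%s] Action\n", dev);
        printf("[FTL][%s]     name:     %s\n", dev, name);
        printf("[FTL][%s]     duration: %.3f ms\n", dev, ms);
        printf("[FTL][%s]     status:   %d\n", dev, status);
}

int main(void)
{
        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... the step body would run here ... */
        trace_step("ftl0", "Initialize NV cache", start, 0);
        return 0;
}
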
00:28:19.164 [2024-10-07 12:38:42.325730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.445 ms, result 0 00:28:20.543  [2024-10-07T12:38:44.772Z] Copying: 1220/1048576 [kB] (1220 kBps) [2024-10-07T12:38:45.710Z] Copying: 2996/1048576 [kB] (1776 kBps) [2024-10-07T12:38:46.647Z] Copying: 19/1024 [MB] (16 MBps) [2024-10-07T12:38:47.585Z] Copying: 52/1024 [MB] (32 MBps) [2024-10-07T12:38:48.521Z] Copying: 85/1024 [MB] (33 MBps) [2024-10-07T12:38:49.899Z] Copying: 118/1024 [MB] (33 MBps) [2024-10-07T12:38:50.837Z] Copying: 151/1024 [MB] (32 MBps) [2024-10-07T12:38:51.774Z] Copying: 183/1024 [MB] (31 MBps) [2024-10-07T12:38:52.712Z] Copying: 215/1024 [MB] (32 MBps) [2024-10-07T12:38:53.649Z] Copying: 247/1024 [MB] (31 MBps) [2024-10-07T12:38:54.586Z] Copying: 279/1024 [MB] (31 MBps) [2024-10-07T12:38:55.524Z] Copying: 311/1024 [MB] (32 MBps) [2024-10-07T12:38:56.903Z] Copying: 343/1024 [MB] (32 MBps) [2024-10-07T12:38:57.839Z] Copying: 376/1024 [MB] (32 MBps) [2024-10-07T12:38:58.777Z] Copying: 409/1024 [MB] (33 MBps) [2024-10-07T12:38:59.713Z] Copying: 442/1024 [MB] (32 MBps) [2024-10-07T12:39:00.650Z] Copying: 476/1024 [MB] (33 MBps) [2024-10-07T12:39:01.586Z] Copying: 509/1024 [MB] (33 MBps) [2024-10-07T12:39:02.523Z] Copying: 543/1024 [MB] (34 MBps) [2024-10-07T12:39:03.901Z] Copying: 577/1024 [MB] (33 MBps) [2024-10-07T12:39:04.868Z] Copying: 611/1024 [MB] (33 MBps) [2024-10-07T12:39:05.807Z] Copying: 644/1024 [MB] (33 MBps) [2024-10-07T12:39:06.745Z] Copying: 678/1024 [MB] (34 MBps) [2024-10-07T12:39:07.682Z] Copying: 712/1024 [MB] (33 MBps) [2024-10-07T12:39:08.619Z] Copying: 746/1024 [MB] (33 MBps) [2024-10-07T12:39:09.556Z] Copying: 780/1024 [MB] (34 MBps) [2024-10-07T12:39:10.494Z] Copying: 815/1024 [MB] (34 MBps) [2024-10-07T12:39:11.895Z] Copying: 848/1024 [MB] (33 MBps) [2024-10-07T12:39:12.832Z] Copying: 881/1024 [MB] (32 MBps) [2024-10-07T12:39:13.769Z] Copying: 914/1024 [MB] (32 MBps) [2024-10-07T12:39:14.707Z] Copying: 946/1024 [MB] (32 MBps) [2024-10-07T12:39:15.645Z] Copying: 979/1024 [MB] (32 MBps) [2024-10-07T12:39:15.904Z] Copying: 1012/1024 [MB] (32 MBps) [2024-10-07T12:39:16.472Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-10-07 12:39:16.370010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.370174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:53.181 [2024-10-07 12:39:16.370198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:53.181 [2024-10-07 12:39:16.370215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.370249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:53.181 [2024-10-07 12:39:16.374985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.375026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:53.181 [2024-10-07 12:39:16.375039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:28:53.181 [2024-10-07 12:39:16.375050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.375354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.375370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:53.181 [2024-10-07 
12:39:16.375383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:28:53.181 [2024-10-07 12:39:16.375393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.391956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.392215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:53.181 [2024-10-07 12:39:16.392303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.567 ms 00:28:53.181 [2024-10-07 12:39:16.392348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.397670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.397801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:53.181 [2024-10-07 12:39:16.397958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:28:53.181 [2024-10-07 12:39:16.397996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.434154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.434287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:53.181 [2024-10-07 12:39:16.434433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.145 ms 00:28:53.181 [2024-10-07 12:39:16.434449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.454556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.454586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:53.181 [2024-10-07 12:39:16.454599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.052 ms 00:28:53.181 [2024-10-07 12:39:16.454609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.181 [2024-10-07 12:39:16.456662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.181 [2024-10-07 12:39:16.456797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:53.181 [2024-10-07 12:39:16.456818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:28:53.181 [2024-10-07 12:39:16.456828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.442 [2024-10-07 12:39:16.492547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.442 [2024-10-07 12:39:16.492577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:53.442 [2024-10-07 12:39:16.492591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.754 ms 00:28:53.442 [2024-10-07 12:39:16.492601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.442 [2024-10-07 12:39:16.529046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.442 [2024-10-07 12:39:16.529076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:53.442 [2024-10-07 12:39:16.529088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.467 ms 00:28:53.442 [2024-10-07 12:39:16.529098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.442 [2024-10-07 12:39:16.565692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.442 [2024-10-07 12:39:16.565828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:28:53.442 [2024-10-07 12:39:16.565849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.615 ms 00:28:53.442 [2024-10-07 12:39:16.565859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.442 [2024-10-07 12:39:16.602303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.442 [2024-10-07 12:39:16.602332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:53.442 [2024-10-07 12:39:16.602344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.366 ms 00:28:53.442 [2024-10-07 12:39:16.602354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.442 [2024-10-07 12:39:16.602390] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:53.442 [2024-10-07 12:39:16.602405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:53.442 [2024-10-07 12:39:16.602423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:53.442 [2024-10-07 12:39:16.602434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:28:53.442 [2024-10-07 12:39:16.602612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:53.442 [2024-10-07 12:39:16.602808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.602993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603454] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:53.443 [2024-10-07 12:39:16.603533] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:53.443 [2024-10-07 12:39:16.603543] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cfa8e74c-67c0-4e91-a7e6-4267d33e5f42 00:28:53.443 [2024-10-07 12:39:16.603553] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:53.443 [2024-10-07 12:39:16.603563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:28:53.443 [2024-10-07 12:39:16.603572] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:28:53.443 [2024-10-07 12:39:16.603583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:28:53.443 [2024-10-07 12:39:16.603592] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:53.443 [2024-10-07 12:39:16.603602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:53.443 [2024-10-07 12:39:16.603613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:53.443 [2024-10-07 12:39:16.603628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:53.443 [2024-10-07 12:39:16.603637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:53.443 [2024-10-07 12:39:16.603646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.443 [2024-10-07 12:39:16.603657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:53.443 [2024-10-07 12:39:16.603679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:28:53.443 [2024-10-07 12:39:16.603692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.443 [2024-10-07 12:39:16.623372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.443 [2024-10-07 12:39:16.623398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:53.443 [2024-10-07 12:39:16.623410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms 00:28:53.443 [2024-10-07 12:39:16.623420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.443 [2024-10-07 12:39:16.624029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.443 [2024-10-07 12:39:16.624042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:53.443 [2024-10-07 12:39:16.624053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:28:53.443 [2024-10-07 12:39:16.624063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:28:53.443 [2024-10-07 12:39:16.667958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.443 [2024-10-07 12:39:16.668111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:53.443 [2024-10-07 12:39:16.668133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.443 [2024-10-07 12:39:16.668144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.443 [2024-10-07 12:39:16.668199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.443 [2024-10-07 12:39:16.668217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:53.443 [2024-10-07 12:39:16.668227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.443 [2024-10-07 12:39:16.668238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.443 [2024-10-07 12:39:16.668301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.443 [2024-10-07 12:39:16.668314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:53.443 [2024-10-07 12:39:16.668325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.443 [2024-10-07 12:39:16.668335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.443 [2024-10-07 12:39:16.668351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.443 [2024-10-07 12:39:16.668362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:53.443 [2024-10-07 12:39:16.668377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.443 [2024-10-07 12:39:16.668387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.790761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.790803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:53.703 [2024-10-07 12:39:16.790820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.790830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.889791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.889835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:53.703 [2024-10-07 12:39:16.889849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.889859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.889954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.889967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:53.703 [2024-10-07 12:39:16.889978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.889987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.890024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.890036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:53.703 [2024-10-07 12:39:16.890046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 
12:39:16.890060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.890165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.890178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:53.703 [2024-10-07 12:39:16.890189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.890199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.890233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.890245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:53.703 [2024-10-07 12:39:16.890255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.890265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.890305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.890316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:53.703 [2024-10-07 12:39:16.890326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.890336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.890376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:53.703 [2024-10-07 12:39:16.890388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:53.703 [2024-10-07 12:39:16.890398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:53.703 [2024-10-07 12:39:16.890411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.703 [2024-10-07 12:39:16.890525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.339 ms, result 0 00:28:55.081 00:28:55.081 00:28:55.081 12:39:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:56.459 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:56.459 12:39:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:56.459 [2024-10-07 12:39:19.716534] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
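
At this point the test has verified the checksum of the previously dumped testfile (md5sum -c reports OK), and dirty_shutdown.sh@95 reads a second 262144-block region at block offset 262144 from ftl0 into testfile2 via spdk_dd. A quick offset/length check of those parameters, assuming the 4 KiB FTL block size implied by the 1024 MiB total that the copy progress reports for count=262144:

#include <stdio.h>

int main(void)
{
        const unsigned long long blk_sz = 4096;   /* assumed FTL block size  */
        const unsigned long long count  = 262144; /* --count from spdk_dd    */
        const unsigned long long skip   = 262144; /* --skip  from spdk_dd    */

        /* 262144 * 4096 = 2^30 bytes, i.e. 1024 MiB each */
        printf("offset: %llu MiB\n", skip  * blk_sz >> 20);
        printf("length: %llu MiB\n", count * blk_sz >> 20);
        return 0;
}
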
00:28:56.459 [2024-10-07 12:39:19.716645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80769 ] 00:28:56.718 [2024-10-07 12:39:19.884706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.977 [2024-10-07 12:39:20.081675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.236 [2024-10-07 12:39:20.430506] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:57.236 [2024-10-07 12:39:20.430574] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:57.496 [2024-10-07 12:39:20.591015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.591062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:57.496 [2024-10-07 12:39:20.591077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:57.496 [2024-10-07 12:39:20.591092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.591139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.591151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:57.496 [2024-10-07 12:39:20.591161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:57.496 [2024-10-07 12:39:20.591171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.591192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:57.496 [2024-10-07 12:39:20.592160] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:57.496 [2024-10-07 12:39:20.592195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.592208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:57.496 [2024-10-07 12:39:20.592219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:28:57.496 [2024-10-07 12:39:20.592229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.593658] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:57.496 [2024-10-07 12:39:20.611604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.611640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:57.496 [2024-10-07 12:39:20.611654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.976 ms 00:28:57.496 [2024-10-07 12:39:20.611663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.611719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.611731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:57.496 [2024-10-07 12:39:20.611742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:57.496 [2024-10-07 12:39:20.611752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.618562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:57.496 [2024-10-07 12:39:20.618703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:57.496 [2024-10-07 12:39:20.618739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.755 ms 00:28:57.496 [2024-10-07 12:39:20.618749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.618832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.618844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:57.496 [2024-10-07 12:39:20.618854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:57.496 [2024-10-07 12:39:20.618863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.618907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.618937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:57.496 [2024-10-07 12:39:20.618948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:57.496 [2024-10-07 12:39:20.618981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.619005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:57.496 [2024-10-07 12:39:20.623721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.623751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:57.496 [2024-10-07 12:39:20.623764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.729 ms 00:28:57.496 [2024-10-07 12:39:20.623773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.623800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.496 [2024-10-07 12:39:20.623811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:57.496 [2024-10-07 12:39:20.623820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:57.496 [2024-10-07 12:39:20.623830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.496 [2024-10-07 12:39:20.623884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:57.496 [2024-10-07 12:39:20.623918] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:57.496 [2024-10-07 12:39:20.623969] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:57.496 [2024-10-07 12:39:20.623986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:57.496 [2024-10-07 12:39:20.624073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:57.496 [2024-10-07 12:39:20.624086] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:57.496 [2024-10-07 12:39:20.624099] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:57.496 [2024-10-07 12:39:20.624115] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:57.496 [2024-10-07 12:39:20.624128] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:57.497 [2024-10-07 12:39:20.624149] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:57.497 [2024-10-07 12:39:20.624158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:57.497 [2024-10-07 12:39:20.624177] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:57.497 [2024-10-07 12:39:20.624187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.624198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:57.497 [2024-10-07 12:39:20.624207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:28:57.497 [2024-10-07 12:39:20.624217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.497 [2024-10-07 12:39:20.624287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.624301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:57.497 [2024-10-07 12:39:20.624312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:57.497 [2024-10-07 12:39:20.624321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.497 [2024-10-07 12:39:20.624429] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:57.497 [2024-10-07 12:39:20.624443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:57.497 [2024-10-07 12:39:20.624454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:57.497 [2024-10-07 12:39:20.624484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:57.497 [2024-10-07 12:39:20.624511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:57.497 [2024-10-07 12:39:20.624531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:57.497 [2024-10-07 12:39:20.624541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:57.497 [2024-10-07 12:39:20.624549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:57.497 [2024-10-07 12:39:20.624568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:57.497 [2024-10-07 12:39:20.624578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:57.497 [2024-10-07 12:39:20.624587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:57.497 [2024-10-07 12:39:20.624605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624614] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:57.497 [2024-10-07 12:39:20.624632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:57.497 [2024-10-07 12:39:20.624660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:57.497 [2024-10-07 12:39:20.624687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:57.497 [2024-10-07 12:39:20.624715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:57.497 [2024-10-07 12:39:20.624742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:57.497 [2024-10-07 12:39:20.624760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:57.497 [2024-10-07 12:39:20.624769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:57.497 [2024-10-07 12:39:20.624777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:57.497 [2024-10-07 12:39:20.624786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:57.497 [2024-10-07 12:39:20.624795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:57.497 [2024-10-07 12:39:20.624804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:57.497 [2024-10-07 12:39:20.624822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:57.497 [2024-10-07 12:39:20.624831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624840] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:57.497 [2024-10-07 12:39:20.624850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:57.497 [2024-10-07 12:39:20.624864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.497 [2024-10-07 12:39:20.624883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:57.497 [2024-10-07 12:39:20.624892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:57.497 [2024-10-07 12:39:20.624901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:57.497 
[2024-10-07 12:39:20.624910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:57.497 [2024-10-07 12:39:20.624930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:57.497 [2024-10-07 12:39:20.624940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:57.497 [2024-10-07 12:39:20.624950] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:57.497 [2024-10-07 12:39:20.624962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.624974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:57.497 [2024-10-07 12:39:20.624984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:57.497 [2024-10-07 12:39:20.624994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:57.497 [2024-10-07 12:39:20.625004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:57.497 [2024-10-07 12:39:20.625015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:57.497 [2024-10-07 12:39:20.625025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:57.497 [2024-10-07 12:39:20.625035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:57.497 [2024-10-07 12:39:20.625045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:57.497 [2024-10-07 12:39:20.625055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:57.497 [2024-10-07 12:39:20.625065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.625075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.625085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.625095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.625105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:57.497 [2024-10-07 12:39:20.625115] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:57.497 [2024-10-07 12:39:20.625126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.625147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:57.497 [2024-10-07 12:39:20.625157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:57.497 [2024-10-07 12:39:20.625167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:57.497 [2024-10-07 12:39:20.625178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:57.497 [2024-10-07 12:39:20.625189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.625199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:57.497 [2024-10-07 12:39:20.625209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:28:57.497 [2024-10-07 12:39:20.625218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.497 [2024-10-07 12:39:20.689003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.689049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:57.497 [2024-10-07 12:39:20.689068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.842 ms 00:28:57.497 [2024-10-07 12:39:20.689083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.497 [2024-10-07 12:39:20.689190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.689206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:57.497 [2024-10-07 12:39:20.689220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:57.497 [2024-10-07 12:39:20.689233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.497 [2024-10-07 12:39:20.730350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.730386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:57.497 [2024-10-07 12:39:20.730399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.092 ms 00:28:57.497 [2024-10-07 12:39:20.730414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.497 [2024-10-07 12:39:20.730448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.497 [2024-10-07 12:39:20.730458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:57.497 [2024-10-07 12:39:20.730468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:57.497 [2024-10-07 12:39:20.730478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.498 [2024-10-07 12:39:20.730979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.498 [2024-10-07 12:39:20.730994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:57.498 [2024-10-07 12:39:20.731005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:28:57.498 [2024-10-07 12:39:20.731038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.498 [2024-10-07 12:39:20.731148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.498 [2024-10-07 12:39:20.731161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:57.498 [2024-10-07 12:39:20.731171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:57.498 [2024-10-07 12:39:20.731181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.498 [2024-10-07 12:39:20.749194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.498 [2024-10-07 12:39:20.749342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:57.498 [2024-10-07 12:39:20.749363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.022 ms 00:28:57.498 [2024-10-07 12:39:20.749373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.498 [2024-10-07 12:39:20.767599] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:57.498 [2024-10-07 12:39:20.767759] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:57.498 [2024-10-07 12:39:20.767778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.498 [2024-10-07 12:39:20.767789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:57.498 [2024-10-07 12:39:20.767800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.330 ms 00:28:57.498 [2024-10-07 12:39:20.767810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.795648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.795687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:57.757 [2024-10-07 12:39:20.795700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.793 ms 00:28:57.757 [2024-10-07 12:39:20.795711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.812819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.812855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:57.757 [2024-10-07 12:39:20.812868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.096 ms 00:28:57.757 [2024-10-07 12:39:20.812877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.830262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.830387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:57.757 [2024-10-07 12:39:20.830421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.332 ms 00:28:57.757 [2024-10-07 12:39:20.830431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.831235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.831255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:57.757 [2024-10-07 12:39:20.831266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:28:57.757 [2024-10-07 12:39:20.831276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.913182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.913238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:57.757 [2024-10-07 12:39:20.913254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.017 ms 00:28:57.757 [2024-10-07 12:39:20.913265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.923392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:57.757 [2024-10-07 12:39:20.925828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.925856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:57.757 [2024-10-07 12:39:20.925868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.543 ms 00:28:57.757 [2024-10-07 12:39:20.925899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.926000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.926014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:57.757 [2024-10-07 12:39:20.926036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:57.757 [2024-10-07 12:39:20.926045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.757 [2024-10-07 12:39:20.926886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.757 [2024-10-07 12:39:20.926913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:57.757 [2024-10-07 12:39:20.926924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:28:57.758 [2024-10-07 12:39:20.926933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.758 [2024-10-07 12:39:20.926966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.758 [2024-10-07 12:39:20.926977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:57.758 [2024-10-07 12:39:20.926987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:57.758 [2024-10-07 12:39:20.926996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.758 [2024-10-07 12:39:20.927039] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:57.758 [2024-10-07 12:39:20.927051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.758 [2024-10-07 12:39:20.927061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:57.758 [2024-10-07 12:39:20.927074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:57.758 [2024-10-07 12:39:20.927083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.758 [2024-10-07 12:39:20.961577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.758 [2024-10-07 12:39:20.961614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:57.758 [2024-10-07 12:39:20.961628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.516 ms 00:28:57.758 [2024-10-07 12:39:20.961638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.758 [2024-10-07 12:39:20.961709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.758 [2024-10-07 12:39:20.961721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:57.758 [2024-10-07 12:39:20.961732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:57.758 [2024-10-07 12:39:20.961741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
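The startup trace above reports every FTL management step as an Action with a name, duration, and status line. To see where startup time actually goes, the name/duration pairs can be folded together; the sketch below is a hypothetical helper (not part of the SPDK tree) and assumes the one-entry-per-console-line format this log uses.

    #!/usr/bin/env bash
    # Hypothetical log helper, not part of the SPDK test suite: pair each
    # "name: <step>" trace line with the "duration: <N> ms" line that
    # follows it, then print the slowest management steps first.
    # Usage: ./ftl_step_times.sh ftl.log
    awk '
      /trace_step:.*name:/     { sub(/.*name: /, "");     step = $0 }
      /trace_step:.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                 printf "%10.3f ms  %s\n", $0, step }
    ' "$1" | sort -rn

Run against the startup trace above, this would rank "Restore P2L checkpoints" (82.017 ms) and "Initialize metadata" (63.842 ms) at the top.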
00:28:57.758 [2024-10-07 12:39:20.962936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.070 ms, result 0 00:28:59.136  [2024-10-07T12:39:23.415Z] Copying: 25/1024 [MB] (25 MBps) [2024-10-07T12:39:24.353Z] Copying: 50/1024 [MB] (25 MBps) [2024-10-07T12:39:25.317Z] Copying: 76/1024 [MB] (25 MBps) [2024-10-07T12:39:26.255Z] Copying: 102/1024 [MB] (25 MBps) [2024-10-07T12:39:27.193Z] Copying: 127/1024 [MB] (25 MBps) [2024-10-07T12:39:28.572Z] Copying: 152/1024 [MB] (25 MBps) [2024-10-07T12:39:29.509Z] Copying: 178/1024 [MB] (25 MBps) [2024-10-07T12:39:30.447Z] Copying: 203/1024 [MB] (25 MBps) [2024-10-07T12:39:31.385Z] Copying: 229/1024 [MB] (25 MBps) [2024-10-07T12:39:32.322Z] Copying: 255/1024 [MB] (26 MBps) [2024-10-07T12:39:33.261Z] Copying: 281/1024 [MB] (25 MBps) [2024-10-07T12:39:34.201Z] Copying: 306/1024 [MB] (25 MBps) [2024-10-07T12:39:35.579Z] Copying: 332/1024 [MB] (25 MBps) [2024-10-07T12:39:36.147Z] Copying: 358/1024 [MB] (25 MBps) [2024-10-07T12:39:37.526Z] Copying: 383/1024 [MB] (25 MBps) [2024-10-07T12:39:38.464Z] Copying: 408/1024 [MB] (25 MBps) [2024-10-07T12:39:39.402Z] Copying: 434/1024 [MB] (25 MBps) [2024-10-07T12:39:40.341Z] Copying: 459/1024 [MB] (25 MBps) [2024-10-07T12:39:41.279Z] Copying: 485/1024 [MB] (25 MBps) [2024-10-07T12:39:42.216Z] Copying: 509/1024 [MB] (24 MBps) [2024-10-07T12:39:43.154Z] Copying: 535/1024 [MB] (25 MBps) [2024-10-07T12:39:44.534Z] Copying: 560/1024 [MB] (25 MBps) [2024-10-07T12:39:45.470Z] Copying: 586/1024 [MB] (25 MBps) [2024-10-07T12:39:46.407Z] Copying: 611/1024 [MB] (25 MBps) [2024-10-07T12:39:47.345Z] Copying: 636/1024 [MB] (24 MBps) [2024-10-07T12:39:48.310Z] Copying: 662/1024 [MB] (25 MBps) [2024-10-07T12:39:49.247Z] Copying: 686/1024 [MB] (24 MBps) [2024-10-07T12:39:50.185Z] Copying: 711/1024 [MB] (25 MBps) [2024-10-07T12:39:51.123Z] Copying: 737/1024 [MB] (25 MBps) [2024-10-07T12:39:52.502Z] Copying: 763/1024 [MB] (25 MBps) [2024-10-07T12:39:53.439Z] Copying: 788/1024 [MB] (24 MBps) [2024-10-07T12:39:54.376Z] Copying: 813/1024 [MB] (24 MBps) [2024-10-07T12:39:55.314Z] Copying: 837/1024 [MB] (24 MBps) [2024-10-07T12:39:56.251Z] Copying: 862/1024 [MB] (24 MBps) [2024-10-07T12:39:57.188Z] Copying: 886/1024 [MB] (24 MBps) [2024-10-07T12:39:58.125Z] Copying: 910/1024 [MB] (23 MBps) [2024-10-07T12:39:59.504Z] Copying: 935/1024 [MB] (24 MBps) [2024-10-07T12:40:00.441Z] Copying: 961/1024 [MB] (25 MBps) [2024-10-07T12:40:01.378Z] Copying: 984/1024 [MB] (23 MBps) [2024-10-07T12:40:01.636Z] Copying: 1010/1024 [MB] (25 MBps) [2024-10-07T12:40:01.897Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-07 12:40:01.660699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.660764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:38.606 [2024-10-07 12:40:01.660784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:38.606 [2024-10-07 12:40:01.660805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.660832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:38.606 [2024-10-07 12:40:01.668386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.668438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:38.606 [2024-10-07 12:40:01.668459] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 7.544 ms 00:29:38.606 [2024-10-07 12:40:01.668476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.668781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.668801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:38.606 [2024-10-07 12:40:01.668819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:29:38.606 [2024-10-07 12:40:01.668836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.672689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.672831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:38.606 [2024-10-07 12:40:01.672853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:29:38.606 [2024-10-07 12:40:01.672865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.678485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.678520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:38.606 [2024-10-07 12:40:01.678533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:29:38.606 [2024-10-07 12:40:01.678544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.714127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.714278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:38.606 [2024-10-07 12:40:01.714298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.574 ms 00:29:38.606 [2024-10-07 12:40:01.714309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.734602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.734643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:38.606 [2024-10-07 12:40:01.734655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.233 ms 00:29:38.606 [2024-10-07 12:40:01.734664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.736807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.736844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:38.606 [2024-10-07 12:40:01.736857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:29:38.606 [2024-10-07 12:40:01.736867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.771656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.771802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:38.606 [2024-10-07 12:40:01.771822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.828 ms 00:29:38.606 [2024-10-07 12:40:01.771832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.806104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.806251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:38.606 
[2024-10-07 12:40:01.806270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.220 ms 00:29:38.606 [2024-10-07 12:40:01.806280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.840527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.840686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:38.606 [2024-10-07 12:40:01.840706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.207 ms 00:29:38.606 [2024-10-07 12:40:01.840717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.874609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.606 [2024-10-07 12:40:01.874645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:38.606 [2024-10-07 12:40:01.874656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.874 ms 00:29:38.606 [2024-10-07 12:40:01.874665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.606 [2024-10-07 12:40:01.874699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:38.606 [2024-10-07 12:40:01.874713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:38.606 [2024-10-07 12:40:01.874725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:38.606 [2024-10-07 12:40:01.874735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:38.606 [2024-10-07 12:40:01.874813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 
12:40:01.874872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.874997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:29:38.607 [2024-10-07 12:40:01.875185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:38.607 [2024-10-07 12:40:01.875813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:38.608 [2024-10-07 12:40:01.875823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cfa8e74c-67c0-4e91-a7e6-4267d33e5f42 00:29:38.608 [2024-10-07 12:40:01.875833] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:38.608 [2024-10-07 12:40:01.875843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:38.608 [2024-10-07 12:40:01.875852] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:38.608 [2024-10-07 12:40:01.875862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:38.608 [2024-10-07 12:40:01.875872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:38.608 [2024-10-07 12:40:01.875887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:38.608 [2024-10-07 12:40:01.875897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:38.608 [2024-10-07 12:40:01.875905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:38.608 [2024-10-07 12:40:01.875922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:38.608 [2024-10-07 12:40:01.875932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.608 [2024-10-07 12:40:01.875952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:38.608 [2024-10-07 12:40:01.875963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:29:38.608 [2024-10-07 12:40:01.875973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.608 [2024-10-07 12:40:01.894374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.608 [2024-10-07 12:40:01.894497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:38.608 [2024-10-07 12:40:01.894532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:29:38.608 [2024-10-07 12:40:01.894548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.608 [2024-10-07 12:40:01.895043] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.608 [2024-10-07 12:40:01.895056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:38.608 [2024-10-07 12:40:01.895067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:29:38.608 [2024-10-07 12:40:01.895076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.867 [2024-10-07 12:40:01.936009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.867 [2024-10-07 12:40:01.936046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:38.867 [2024-10-07 12:40:01.936058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.867 [2024-10-07 12:40:01.936068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.867 [2024-10-07 12:40:01.936115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.867 [2024-10-07 12:40:01.936126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:38.867 [2024-10-07 12:40:01.936135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.867 [2024-10-07 12:40:01.936144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.867 [2024-10-07 12:40:01.936217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.867 [2024-10-07 12:40:01.936230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:38.867 [2024-10-07 12:40:01.936240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.867 [2024-10-07 12:40:01.936253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.867 [2024-10-07 12:40:01.936268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.867 [2024-10-07 12:40:01.936278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:38.867 [2024-10-07 12:40:01.936287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.867 [2024-10-07 12:40:01.936296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.867 [2024-10-07 12:40:02.051455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.867 [2024-10-07 12:40:02.051675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:38.867 [2024-10-07 12:40:02.051703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.867 [2024-10-07 12:40:02.051713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.867 [2024-10-07 12:40:02.146594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.867 [2024-10-07 12:40:02.146759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:38.868 [2024-10-07 12:40:02.146780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.146790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.146876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.868 [2024-10-07 12:40:02.146889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:38.868 [2024-10-07 12:40:02.146919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.146930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.146999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.868 [2024-10-07 12:40:02.147011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:38.868 [2024-10-07 12:40:02.147022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.147032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.147139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.868 [2024-10-07 12:40:02.147153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:38.868 [2024-10-07 12:40:02.147163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.147173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.147213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.868 [2024-10-07 12:40:02.147225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:38.868 [2024-10-07 12:40:02.147235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.147246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.147292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.868 [2024-10-07 12:40:02.147303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:38.868 [2024-10-07 12:40:02.147313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.147322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.147365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.868 [2024-10-07 12:40:02.147377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:38.868 [2024-10-07 12:40:02.147387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.868 [2024-10-07 12:40:02.147397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.868 [2024-10-07 12:40:02.147510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.574 ms, result 0 00:29:40.246 00:29:40.246 00:29:40.246 12:40:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:42.153 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:42.153 12:40:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:42.153 12:40:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:42.153 12:40:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:42.153 12:40:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 
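With the 'FTL shutdown' management process finished, dirty_shutdown.sh checks the restored volume against the checksum it recorded before simulating the crash (the "md5sum -c .../testfile2.md5 ... OK" lines that follow). The pattern is record-then-verify; a minimal sketch, with illustrative paths rather than the script's real variables:

    #!/usr/bin/env bash
    # Minimal record-then-verify sketch; $testfile is illustrative, not
    # the actual dirty_shutdown.sh variable.
    set -euo pipefail
    testfile=/tmp/ftl_testfile

    # Before the dirty shutdown: write known data, record its checksum.
    dd if=/dev/urandom of="$testfile" bs=1M count=256 status=none
    md5sum "$testfile" > "$testfile.md5"

    # ... simulated dirty shutdown and FTL restore happen here ...

    # After restore: md5sum -c exits non-zero if any byte changed,
    # which fails the test under `set -e`.
    md5sum -c "$testfile.md5"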
00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78923 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78923 ']' 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 78923 00:29:42.153 Process with pid 78923 is not found 00:29:42.153 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78923) - No such process 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78923 is not found' 00:29:42.153 12:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:42.422 Remove shared memory files 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:42.422 ************************************ 00:29:42.422 END TEST ftl_dirty_shutdown 00:29:42.422 ************************************ 00:29:42.422 00:29:42.422 real 3m41.631s 00:29:42.422 user 4m11.589s 00:29:42.422 sys 0m39.506s 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:42.422 12:40:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.422 12:40:05 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:42.422 12:40:05 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:42.422 12:40:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:42.422 12:40:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:42.423 ************************************ 00:29:42.423 START TEST ftl_upgrade_shutdown 00:29:42.423 ************************************ 00:29:42.423 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:42.733 * Looking for test storage... 
00:29:42.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.733 --rc genhtml_branch_coverage=1 00:29:42.733 --rc genhtml_function_coverage=1 00:29:42.733 --rc genhtml_legend=1 00:29:42.733 --rc geninfo_all_blocks=1 00:29:42.733 --rc geninfo_unexecuted_blocks=1 00:29:42.733 00:29:42.733 ' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.733 --rc genhtml_branch_coverage=1 00:29:42.733 --rc genhtml_function_coverage=1 00:29:42.733 --rc genhtml_legend=1 00:29:42.733 --rc geninfo_all_blocks=1 00:29:42.733 --rc geninfo_unexecuted_blocks=1 00:29:42.733 00:29:42.733 ' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.733 --rc genhtml_branch_coverage=1 00:29:42.733 --rc genhtml_function_coverage=1 00:29:42.733 --rc genhtml_legend=1 00:29:42.733 --rc geninfo_all_blocks=1 00:29:42.733 --rc geninfo_unexecuted_blocks=1 00:29:42.733 00:29:42.733 ' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:42.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.733 --rc genhtml_branch_coverage=1 00:29:42.733 --rc genhtml_function_coverage=1 00:29:42.733 --rc genhtml_legend=1 00:29:42.733 --rc geninfo_all_blocks=1 00:29:42.733 --rc geninfo_unexecuted_blocks=1 00:29:42.733 00:29:42.733 ' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:42.733 12:40:05 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81305 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81305 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81305 ']' 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.733 12:40:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.993 [2024-10-07 12:40:06.038666] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
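Before the upgrade/shutdown cases can run, tcp_target_setup (traced above) launches a fresh spdk_tgt pinned to core 0 and waitforlisten 81305 blocks until the target's RPC socket at /var/tmp/spdk.sock answers, which the reactor-start line below confirms. A sketch of that start-and-wait pattern; the real logic lives in autotest_common.sh's waitforlisten, and the polling loop and timeout here are illustrative:

    #!/usr/bin/env bash
    # Illustrative start-and-wait sketch, not the real waitforlisten.
    set -euo pipefail
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN" '--cpumask=[0]' &   # same core mask as the invocation above
    pid=$!

    # Poll the default UNIX-domain RPC socket until the target responds.
    for _ in $(seq 1 100); do
        if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            echo "spdk_tgt (pid $pid) is up"
            exit 0
        fi
        sleep 0.1
    done
    echo "spdk_tgt failed to start within 10s" >&2
    kill "$pid" 2>/dev/null || true
    exit 1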
00:29:42.993 [2024-10-07 12:40:06.038967] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81305 ] 00:29:42.993 [2024-10-07 12:40:06.211974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.252 [2024-10-07 12:40:06.402209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:44.190 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:44.191 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:44.450 { 00:29:44.450 "name": "basen1", 00:29:44.450 "aliases": [ 00:29:44.450 "291b0d6e-4874-4119-92b0-edc99fb7b669" 00:29:44.450 ], 00:29:44.450 "product_name": "NVMe disk", 00:29:44.450 "block_size": 4096, 00:29:44.450 "num_blocks": 1310720, 00:29:44.450 "uuid": "291b0d6e-4874-4119-92b0-edc99fb7b669", 00:29:44.450 "numa_id": -1, 00:29:44.450 "assigned_rate_limits": { 00:29:44.450 "rw_ios_per_sec": 0, 00:29:44.450 "rw_mbytes_per_sec": 0, 00:29:44.450 "r_mbytes_per_sec": 0, 00:29:44.450 "w_mbytes_per_sec": 0 00:29:44.450 }, 00:29:44.450 "claimed": true, 00:29:44.450 "claim_type": "read_many_write_one", 00:29:44.450 "zoned": false, 00:29:44.450 "supported_io_types": { 00:29:44.450 "read": true, 00:29:44.450 "write": true, 00:29:44.450 "unmap": true, 00:29:44.450 "flush": true, 00:29:44.450 "reset": true, 00:29:44.450 "nvme_admin": true, 00:29:44.450 "nvme_io": true, 00:29:44.450 "nvme_io_md": false, 00:29:44.450 "write_zeroes": true, 00:29:44.450 "zcopy": false, 00:29:44.450 "get_zone_info": false, 00:29:44.450 "zone_management": false, 00:29:44.450 "zone_append": false, 00:29:44.450 "compare": true, 00:29:44.450 "compare_and_write": false, 00:29:44.450 "abort": true, 00:29:44.450 "seek_hole": false, 00:29:44.450 "seek_data": false, 00:29:44.450 "copy": true, 00:29:44.450 "nvme_iov_md": false 00:29:44.450 }, 00:29:44.450 "driver_specific": { 00:29:44.450 "nvme": [ 00:29:44.450 { 00:29:44.450 "pci_address": "0000:00:11.0", 00:29:44.450 "trid": { 00:29:44.450 "trtype": "PCIe", 00:29:44.450 "traddr": "0000:00:11.0" 00:29:44.450 }, 00:29:44.450 "ctrlr_data": { 00:29:44.450 "cntlid": 0, 00:29:44.450 "vendor_id": "0x1b36", 00:29:44.450 "model_number": "QEMU NVMe Ctrl", 00:29:44.450 "serial_number": "12341", 00:29:44.450 "firmware_revision": "8.0.0", 00:29:44.450 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:44.450 "oacs": { 00:29:44.450 "security": 0, 00:29:44.450 "format": 1, 00:29:44.450 "firmware": 0, 00:29:44.450 "ns_manage": 1 00:29:44.450 }, 00:29:44.450 "multi_ctrlr": false, 00:29:44.450 "ana_reporting": false 00:29:44.450 }, 00:29:44.450 "vs": { 00:29:44.450 "nvme_version": "1.4" 00:29:44.450 }, 00:29:44.450 "ns_data": { 00:29:44.450 "id": 1, 00:29:44.450 "can_share": false 00:29:44.450 } 00:29:44.450 } 00:29:44.450 ], 00:29:44.450 "mp_policy": "active_passive" 00:29:44.450 } 00:29:44.450 } 00:29:44.450 ]' 00:29:44.450 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:44.710 12:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:44.969 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=3028ca6c-5bd8-4f23-8f9d-126b1581a918 00:29:44.969 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:44.969 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3028ca6c-5bd8-4f23-8f9d-126b1581a918 00:29:44.969 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:45.229 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1e7e79bf-99f0-4bd5-bba6-de2d8da78248 00:29:45.229 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1e7e79bf-99f0-4bd5-bba6-de2d8da78248 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2113e905-e0d9-453b-ac67-fc0861fdeeef 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2113e905-e0d9-453b-ac67-fc0861fdeeef ]] 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2113e905-e0d9-453b-ac67-fc0861fdeeef 5120 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2113e905-e0d9-453b-ac67-fc0861fdeeef 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2113e905-e0d9-453b-ac67-fc0861fdeeef 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2113e905-e0d9-453b-ac67-fc0861fdeeef 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:29:45.488 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2113e905-e0d9-453b-ac67-fc0861fdeeef 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:45.748 { 00:29:45.748 "name": "2113e905-e0d9-453b-ac67-fc0861fdeeef", 00:29:45.748 "aliases": [ 00:29:45.748 "lvs/basen1p0" 00:29:45.748 ], 00:29:45.748 "product_name": "Logical Volume", 00:29:45.748 "block_size": 4096, 00:29:45.748 "num_blocks": 5242880, 00:29:45.748 "uuid": "2113e905-e0d9-453b-ac67-fc0861fdeeef", 00:29:45.748 "assigned_rate_limits": { 00:29:45.748 "rw_ios_per_sec": 0, 00:29:45.748 "rw_mbytes_per_sec": 0, 00:29:45.748 "r_mbytes_per_sec": 0, 00:29:45.748 "w_mbytes_per_sec": 0 00:29:45.748 }, 00:29:45.748 "claimed": false, 00:29:45.748 "zoned": false, 00:29:45.748 "supported_io_types": { 00:29:45.748 "read": true, 00:29:45.748 "write": true, 00:29:45.748 "unmap": true, 00:29:45.748 "flush": false, 00:29:45.748 "reset": true, 00:29:45.748 "nvme_admin": false, 00:29:45.748 "nvme_io": false, 00:29:45.748 "nvme_io_md": false, 00:29:45.748 "write_zeroes": 
true, 00:29:45.748 "zcopy": false, 00:29:45.748 "get_zone_info": false, 00:29:45.748 "zone_management": false, 00:29:45.748 "zone_append": false, 00:29:45.748 "compare": false, 00:29:45.748 "compare_and_write": false, 00:29:45.748 "abort": false, 00:29:45.748 "seek_hole": true, 00:29:45.748 "seek_data": true, 00:29:45.748 "copy": false, 00:29:45.748 "nvme_iov_md": false 00:29:45.748 }, 00:29:45.748 "driver_specific": { 00:29:45.748 "lvol": { 00:29:45.748 "lvol_store_uuid": "1e7e79bf-99f0-4bd5-bba6-de2d8da78248", 00:29:45.748 "base_bdev": "basen1", 00:29:45.748 "thin_provision": true, 00:29:45.748 "num_allocated_clusters": 0, 00:29:45.748 "snapshot": false, 00:29:45.748 "clone": false, 00:29:45.748 "esnap_clone": false 00:29:45.748 } 00:29:45.748 } 00:29:45.748 } 00:29:45.748 ]' 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:45.748 12:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:46.008 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:46.008 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:46.008 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:46.267 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:46.267 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:46.267 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2113e905-e0d9-453b-ac67-fc0861fdeeef -c cachen1p0 --l2p_dram_limit 2 00:29:46.267 [2024-10-07 12:40:09.517355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.517403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:46.267 [2024-10-07 12:40:09.517421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:46.267 [2024-10-07 12:40:09.517448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.517500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.517512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:46.267 [2024-10-07 12:40:09.517525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:46.267 [2024-10-07 12:40:09.517536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.517561] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:46.267 [2024-10-07 
12:40:09.518582] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:46.267 [2024-10-07 12:40:09.518616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.518635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:46.267 [2024-10-07 12:40:09.518651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 00:29:46.267 [2024-10-07 12:40:09.518664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.518741] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5676c5d0-ea0a-43d4-8ab1-2a51fc8a962c 00:29:46.267 [2024-10-07 12:40:09.520239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.520276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:46.267 [2024-10-07 12:40:09.520289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:46.267 [2024-10-07 12:40:09.520302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.527786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.527818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:46.267 [2024-10-07 12:40:09.527831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.450 ms 00:29:46.267 [2024-10-07 12:40:09.527843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.527888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.527915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:46.267 [2024-10-07 12:40:09.527927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:46.267 [2024-10-07 12:40:09.527961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.528028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.528054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:46.267 [2024-10-07 12:40:09.528065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:46.267 [2024-10-07 12:40:09.528077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.528100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:46.267 [2024-10-07 12:40:09.533197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.533225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:46.267 [2024-10-07 12:40:09.533239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.108 ms 00:29:46.267 [2024-10-07 12:40:09.533249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.533279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.533289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:46.267 [2024-10-07 12:40:09.533301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:46.267 [2024-10-07 12:40:09.533313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.533348] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:46.267 [2024-10-07 12:40:09.533463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:46.267 [2024-10-07 12:40:09.533481] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:46.267 [2024-10-07 12:40:09.533493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:46.267 [2024-10-07 12:40:09.533511] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:46.267 [2024-10-07 12:40:09.533522] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:46.267 [2024-10-07 12:40:09.533534] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:46.267 [2024-10-07 12:40:09.533544] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:46.267 [2024-10-07 12:40:09.533555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:46.267 [2024-10-07 12:40:09.533564] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:46.267 [2024-10-07 12:40:09.533576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.533586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:46.267 [2024-10-07 12:40:09.533598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:29:46.267 [2024-10-07 12:40:09.533608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.533675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.267 [2024-10-07 12:40:09.533699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:46.267 [2024-10-07 12:40:09.533712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:29:46.267 [2024-10-07 12:40:09.533721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.267 [2024-10-07 12:40:09.533803] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:46.267 [2024-10-07 12:40:09.533814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:46.267 [2024-10-07 12:40:09.533826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:46.267 [2024-10-07 12:40:09.533836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.533847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:46.267 [2024-10-07 12:40:09.533856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.533868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:46.267 [2024-10-07 12:40:09.533876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:46.267 [2024-10-07 12:40:09.533887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:46.267 [2024-10-07 12:40:09.533896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.533937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:46.267 [2024-10-07 12:40:09.533947] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:46.267 [2024-10-07 12:40:09.533960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.533969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:46.267 [2024-10-07 12:40:09.533981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:46.267 [2024-10-07 12:40:09.533990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:46.267 [2024-10-07 12:40:09.534013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:46.267 [2024-10-07 12:40:09.534043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:46.267 [2024-10-07 12:40:09.534064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:46.267 [2024-10-07 12:40:09.534073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:46.267 [2024-10-07 12:40:09.534100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:46.267 [2024-10-07 12:40:09.534111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:46.267 [2024-10-07 12:40:09.534131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:46.267 [2024-10-07 12:40:09.534140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:46.267 [2024-10-07 12:40:09.534160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:46.267 [2024-10-07 12:40:09.534171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:46.267 [2024-10-07 12:40:09.534193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:46.267 [2024-10-07 12:40:09.534202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:46.267 [2024-10-07 12:40:09.534222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:46.267 [2024-10-07 12:40:09.534253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:46.267 [2024-10-07 12:40:09.534281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:46.267 [2024-10-07 12:40:09.534292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534301] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:46.267 [2024-10-07 12:40:09.534330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:46.267 [2024-10-07 12:40:09.534343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:46.267 [2024-10-07 12:40:09.534364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:46.267 [2024-10-07 12:40:09.534380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:46.267 [2024-10-07 12:40:09.534389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:46.267 [2024-10-07 12:40:09.534400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:46.267 [2024-10-07 12:40:09.534409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:46.267 [2024-10-07 12:40:09.534421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:46.267 [2024-10-07 12:40:09.534434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:46.267 [2024-10-07 12:40:09.534449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:46.267 [2024-10-07 12:40:09.534460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:46.267 [2024-10-07 12:40:09.534473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:46.267 [2024-10-07 12:40:09.534484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:46.267 [2024-10-07 12:40:09.534496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:46.267 [2024-10-07 12:40:09.534506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:46.268 [2024-10-07 12:40:09.534520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:46.268 [2024-10-07 12:40:09.534530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:46.268 [2024-10-07 12:40:09.534543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:46.268 [2024-10-07 12:40:09.534624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:46.268 [2024-10-07 12:40:09.534638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:46.268 [2024-10-07 12:40:09.534663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:46.268 [2024-10-07 12:40:09.534674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:46.268 [2024-10-07 12:40:09.534687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:46.268 [2024-10-07 12:40:09.534697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.268 [2024-10-07 12:40:09.534711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:46.268 [2024-10-07 12:40:09.534721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.950 ms 00:29:46.268 [2024-10-07 12:40:09.534733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.268 [2024-10-07 12:40:09.534778] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
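For reference, the device assembly that produced the FTL startup trace above condenses to the sketch below. The lvstore and lvol UUIDs are the values printed in this particular run and would differ on any other; the final bdev_ftl_create is exactly the call whose startup (superblock init, band/layout setup, NV cache scrub) is being logged here.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Drop any lvol stores left on basen1 by earlier runs (the clear_lvols step traced above).
$RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid' | while read -r lvs; do
    $RPC bdev_lvol_delete_lvstore -u "$lvs"
done

# 20 GiB thin-provisioned (-t) base volume on top of the 5 GiB basen1 namespace.
$RPC bdev_lvol_create_lvstore basen1 lvs
$RPC bdev_lvol_create basen1p0 20480 -t -u 1e7e79bf-99f0-4bd5-bba6-de2d8da78248

# 5 GiB write-buffer cache carved off the 0000:00:10.0 controller.
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
$RPC bdev_split_create cachen1 -s 5120 1    # yields cachen1p0

# Bind base + cache into the FTL bdev; this call triggers the startup sequence logged above.
$RPC -t 60 bdev_ftl_create -b ftl -d 2113e905-e0d9-453b-ac67-fc0861fdeeef -c cachen1p0 --l2p_dram_limit 2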
00:29:46.268 [2024-10-07 12:40:09.534795] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:50.463 [2024-10-07 12:40:13.321889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.321962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:50.463 [2024-10-07 12:40:13.321978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3793.259 ms 00:29:50.463 [2024-10-07 12:40:13.322007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.357860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.358081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.463 [2024-10-07 12:40:13.358191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.624 ms 00:29:50.463 [2024-10-07 12:40:13.358235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.358340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.358447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:50.463 [2024-10-07 12:40:13.358541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:50.463 [2024-10-07 12:40:13.358584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.415995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.416183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.463 [2024-10-07 12:40:13.416322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.439 ms 00:29:50.463 [2024-10-07 12:40:13.416382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.416449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.416605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.463 [2024-10-07 12:40:13.416623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:50.463 [2024-10-07 12:40:13.416640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.417207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.417233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.463 [2024-10-07 12:40:13.417260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.474 ms 00:29:50.463 [2024-10-07 12:40:13.417282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.417332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.417350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.463 [2024-10-07 12:40:13.417363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:50.463 [2024-10-07 12:40:13.417383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.437905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.438080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.463 [2024-10-07 12:40:13.438197] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.530 ms 00:29:50.463 [2024-10-07 12:40:13.438240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.450063] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:50.463 [2024-10-07 12:40:13.451267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.451458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:50.463 [2024-10-07 12:40:13.451550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.944 ms 00:29:50.463 [2024-10-07 12:40:13.451586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.484557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.484685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:50.463 [2024-10-07 12:40:13.484782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.970 ms 00:29:50.463 [2024-10-07 12:40:13.484819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.484979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.485022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:50.463 [2024-10-07 12:40:13.485102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:50.463 [2024-10-07 12:40:13.485136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.519328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.519451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:50.463 [2024-10-07 12:40:13.519540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.165 ms 00:29:50.463 [2024-10-07 12:40:13.519576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.553470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.553594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:50.463 [2024-10-07 12:40:13.553683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.887 ms 00:29:50.463 [2024-10-07 12:40:13.553698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.554367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.554391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:50.463 [2024-10-07 12:40:13.554406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:29:50.463 [2024-10-07 12:40:13.554416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.652700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.652737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:50.463 [2024-10-07 12:40:13.652757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.387 ms 00:29:50.463 [2024-10-07 12:40:13.652770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.687566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:50.463 [2024-10-07 12:40:13.687613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:50.463 [2024-10-07 12:40:13.687630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.775 ms 00:29:50.463 [2024-10-07 12:40:13.687656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.463 [2024-10-07 12:40:13.723371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.463 [2024-10-07 12:40:13.723410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:50.463 [2024-10-07 12:40:13.723426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.730 ms 00:29:50.463 [2024-10-07 12:40:13.723437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.723 [2024-10-07 12:40:13.758743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.723 [2024-10-07 12:40:13.758904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:50.723 [2024-10-07 12:40:13.758936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.319 ms 00:29:50.723 [2024-10-07 12:40:13.758946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.723 [2024-10-07 12:40:13.759018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.723 [2024-10-07 12:40:13.759031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:50.723 [2024-10-07 12:40:13.759051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:50.723 [2024-10-07 12:40:13.759062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.723 [2024-10-07 12:40:13.759161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.723 [2024-10-07 12:40:13.759173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:50.723 [2024-10-07 12:40:13.759186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:50.723 [2024-10-07 12:40:13.759196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.723 [2024-10-07 12:40:13.760177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4249.272 ms, result 0 00:29:50.723 { 00:29:50.723 "name": "ftl", 00:29:50.723 "uuid": "5676c5d0-ea0a-43d4-8ab1-2a51fc8a962c" 00:29:50.723 } 00:29:50.723 12:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:50.723 [2024-10-07 12:40:13.971196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.723 12:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:50.982 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:51.241 [2024-10-07 12:40:14.371233] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:51.241 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:51.501 [2024-10-07 12:40:14.596871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:51.501 12:40:14 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:51.760 Fill FTL, iteration 1 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81438 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81438 /var/tmp/spdk.tgt.sock 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81438 ']' 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:51.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.760 12:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.020 [2024-10-07 12:40:15.069049] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
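Before the fill machinery spins up, note that the export path traced just above (ftl/common.sh@121-126) reduces to four RPCs against the main target plus a save_config; the NQN and listener address are the ones shown in this run.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport --trtype TCP
$RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1    # -a: allow any host, -m: max namespaces
$RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl        # expose the FTL bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$RPC save_config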
00:29:52.020 [2024-10-07 12:40:15.069180] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81438 ] 00:29:52.020 [2024-10-07 12:40:15.243290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.279 [2024-10-07 12:40:15.440642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.217 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.217 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:53.217 12:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:53.477 ftln1 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81438 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81438 ']' 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81438 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81438 00:29:53.477 killing process with pid 81438 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81438' 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81438 00:29:53.477 12:40:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81438 00:29:56.041 12:40:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:56.041 12:40:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:56.041 [2024-10-07 12:40:19.221084] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
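The tcp_dd helper traced above boils down to a two-phase initiator pattern: a throwaway spdk_tgt is used only to build a bdev config (ini.json) that reaches the exported FTL device over TCP, and spdk_dd then replays that config to do the actual I/O on its own. A sketch under the paths used in this run; the sleep stands in for the suite's waitforlisten on the RPC socket.

BIN=/home/vagrant/spdk_repo/spdk/build/bin
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
INI=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

$BIN/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
TGT_PID=$!; sleep 1    # placeholder for waitforlisten

# Attach the exported subsystem; the controller surfaces as bdev ftln1.
$RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
{ echo '{"subsystems": ['; $RPC save_subsystem_config -n bdev; echo ']}'; } > "$INI"
kill "$TGT_PID"    # ini.json now lets spdk_dd re-create ftln1 by itself

# Fill pass: 1024 x 1 MiB of urandom into ftln1 at queue depth 2, offset by --seek blocks of bs.
$BIN/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$INI" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0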
00:29:56.041 [2024-10-07 12:40:19.221214] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81491 ] 00:29:56.300 [2024-10-07 12:40:19.393651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.300 [2024-10-07 12:40:19.585273] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.204  [2024-10-07T12:40:22.063Z] Copying: 243/1024 [MB] (243 MBps) [2024-10-07T12:40:23.441Z] Copying: 490/1024 [MB] (247 MBps) [2024-10-07T12:40:24.378Z] Copying: 736/1024 [MB] (246 MBps) [2024-10-07T12:40:24.378Z] Copying: 977/1024 [MB] (241 MBps) [2024-10-07T12:40:25.756Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:30:02.465 00:30:02.465 Calculate MD5 checksum, iteration 1 00:30:02.465 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:02.466 12:40:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:02.466 [2024-10-07 12:40:25.566848] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
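For scale: each fill pass pushes 1024 chunks of 1 MiB at queue depth 2 over the NVMe/TCP loopback, so the 243 MBps average above works out to roughly 1024 / 243 ≈ 4.2 s of copy time per pass. The read-back just launched reuses the same geometry with --ib/--of and --skip in place of --ob and --seek, so it reads exactly the window that was written.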
00:30:02.466 [2024-10-07 12:40:25.567329] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81556 ] 00:30:02.466 [2024-10-07 12:40:25.738681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.724 [2024-10-07 12:40:25.931669] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.103  [2024-10-07T12:40:28.370Z] Copying: 607/1024 [MB] (607 MBps) [2024-10-07T12:40:29.305Z] Copying: 1024/1024 [MB] (average 603 MBps) 00:30:06.014 00:30:06.015 12:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:06.015 12:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:07.918 Fill FTL, iteration 2 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=97c4711825b774d6bfd0117af3060c0c 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:07.918 12:40:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:07.918 [2024-10-07 12:40:30.797859] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
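Iteration 2, starting here, repeats the same pattern one 1 GiB window further in. Stitched together from the upgrade_shutdown.sh fragments visible in the trace (seek/skip advance by 1024 blocks per pass, and sums[0] was recorded just above as 97c4711825b774d6bfd0117af3060c0c), the fill/verify loop has roughly this shape; tcp_dd is the helper sketched earlier.

# Reconstruction: each pass writes the next 1 GiB window of ftln1, reads it back, records its digest.
seek=0; skip=0; iterations=2; sums=()
for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    seek=$((seek + 1024))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
done

The per-iteration digests are kept so the same windows can be re-verified after the prep-upgrade shutdown exercised below.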
00:30:07.918 [2024-10-07 12:40:30.798152] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81616 ] 00:30:07.918 [2024-10-07 12:40:30.966604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.918 [2024-10-07 12:40:31.152391] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.326  [2024-10-07T12:40:33.995Z] Copying: 247/1024 [MB] (247 MBps) [2024-10-07T12:40:34.932Z] Copying: 493/1024 [MB] (246 MBps) [2024-10-07T12:40:35.869Z] Copying: 732/1024 [MB] (239 MBps) [2024-10-07T12:40:35.869Z] Copying: 973/1024 [MB] (241 MBps) [2024-10-07T12:40:37.247Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:30:13.956 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:13.956 Calculate MD5 checksum, iteration 2 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:13.956 12:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:13.956 [2024-10-07 12:40:37.127909] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:30:13.956 [2024-10-07 12:40:37.128684] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81680 ] 00:30:14.215 [2024-10-07 12:40:37.299495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.215 [2024-10-07 12:40:37.491159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.121  [2024-10-07T12:40:39.980Z] Copying: 613/1024 [MB] (613 MBps) [2024-10-07T12:40:41.363Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:30:18.072 00:30:18.072 12:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:18.072 12:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:19.976 12:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:19.977 12:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2ecbd19344c2ed7ac43cc3807bac3dbc 00:30:19.977 12:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:19.977 12:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:19.977 12:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:19.977 [2024-10-07 12:40:43.012151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.977 [2024-10-07 12:40:43.012218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:19.977 [2024-10-07 12:40:43.012236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:19.977 [2024-10-07 12:40:43.012252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.977 [2024-10-07 12:40:43.012278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.977 [2024-10-07 12:40:43.012290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:19.977 [2024-10-07 12:40:43.012302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:19.977 [2024-10-07 12:40:43.012311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.977 [2024-10-07 12:40:43.012331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.977 [2024-10-07 12:40:43.012342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:19.977 [2024-10-07 12:40:43.012352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:19.977 [2024-10-07 12:40:43.012362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.977 [2024-10-07 12:40:43.012438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.267 ms, result 0 00:30:19.977 true 00:30:19.977 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:19.977 { 00:30:19.977 "name": "ftl", 00:30:19.977 "properties": [ 00:30:19.977 { 00:30:19.977 "name": "superblock_version", 00:30:19.977 "value": 5, 00:30:19.977 "read-only": true 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "name": "base_device", 00:30:19.977 "bands": [ 00:30:19.977 { 00:30:19.977 "id": 0, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 
00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 1, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 2, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 3, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 4, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 5, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 6, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 7, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 8, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 9, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 10, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 11, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 12, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 13, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 14, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 15, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 16, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 17, 00:30:19.977 "state": "FREE", 00:30:19.977 "validity": 0.0 00:30:19.977 } 00:30:19.977 ], 00:30:19.977 "read-only": true 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "name": "cache_device", 00:30:19.977 "type": "bdev", 00:30:19.977 "chunks": [ 00:30:19.977 { 00:30:19.977 "id": 0, 00:30:19.977 "state": "INACTIVE", 00:30:19.977 "utilization": 0.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 1, 00:30:19.977 "state": "CLOSED", 00:30:19.977 "utilization": 1.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 2, 00:30:19.977 "state": "CLOSED", 00:30:19.977 "utilization": 1.0 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 3, 00:30:19.977 "state": "OPEN", 00:30:19.977 "utilization": 0.001953125 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "id": 4, 00:30:19.977 "state": "OPEN", 00:30:19.977 "utilization": 0.0 00:30:19.977 } 00:30:19.977 ], 00:30:19.977 "read-only": true 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "name": "verbose_mode", 00:30:19.977 "value": true, 00:30:19.977 "unit": "", 00:30:19.977 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:19.977 }, 00:30:19.977 { 00:30:19.977 "name": "prep_upgrade_on_shutdown", 00:30:19.977 "value": false, 00:30:19.977 "unit": "", 00:30:19.977 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:19.977 } 00:30:19.977 ] 00:30:19.977 } 00:30:19.977 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:20.236 [2024-10-07 12:40:43.412075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
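The property being set here is the crux of the test: with data on the device (two CLOSED chunks plus one partially filled OPEN chunk in the dump above), prep_upgrade_on_shutdown tells FTL to run its upgrade preparation during shutdown rather than a plain clean shutdown. verbose_mode was enabled first (upgrade_shutdown.sh@52) so the full property dump is visible at all. The knob and the used-chunk check being traced here and just below reduce to:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Flip the shutdown-upgrade knob on the FTL bdev.
$RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

# Count cache chunks holding data; the test checks this is non-zero before shutting down.
$RPC bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'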
00:30:20.236 [2024-10-07 12:40:43.412119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:20.237 [2024-10-07 12:40:43.412133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:20.237 [2024-10-07 12:40:43.412143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.237 [2024-10-07 12:40:43.412167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.237 [2024-10-07 12:40:43.412178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:20.237 [2024-10-07 12:40:43.412189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:20.237 [2024-10-07 12:40:43.412199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.237 [2024-10-07 12:40:43.412219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.237 [2024-10-07 12:40:43.412229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:20.237 [2024-10-07 12:40:43.412241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:20.237 [2024-10-07 12:40:43.412261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.237 [2024-10-07 12:40:43.412315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.228 ms, result 0 00:30:20.237 true 00:30:20.237 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:20.237 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.237 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:20.496 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:20.496 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:20.496 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:20.756 [2024-10-07 12:40:43.807819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.756 [2024-10-07 12:40:43.807864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:20.756 [2024-10-07 12:40:43.807878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:20.756 [2024-10-07 12:40:43.807889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.756 [2024-10-07 12:40:43.807927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.756 [2024-10-07 12:40:43.807939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:20.756 [2024-10-07 12:40:43.807949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:20.756 [2024-10-07 12:40:43.807959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.756 [2024-10-07 12:40:43.807978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.756 [2024-10-07 12:40:43.807989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:20.756 [2024-10-07 12:40:43.807999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:20.756 [2024-10-07 12:40:43.808009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:20.756 [2024-10-07 12:40:43.808074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.232 ms, result 0 00:30:20.756 true 00:30:20.756 12:40:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:20.756 { 00:30:20.756 "name": "ftl", 00:30:20.756 "properties": [ 00:30:20.756 { 00:30:20.756 "name": "superblock_version", 00:30:20.756 "value": 5, 00:30:20.756 "read-only": true 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "name": "base_device", 00:30:20.756 "bands": [ 00:30:20.756 { 00:30:20.756 "id": 0, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 1, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 2, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 3, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 4, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 5, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 6, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 7, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 8, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 9, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 10, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 11, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 12, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 13, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 14, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 15, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 16, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 17, 00:30:20.756 "state": "FREE", 00:30:20.756 "validity": 0.0 00:30:20.756 } 00:30:20.756 ], 00:30:20.756 "read-only": true 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "name": "cache_device", 00:30:20.756 "type": "bdev", 00:30:20.756 "chunks": [ 00:30:20.756 { 00:30:20.756 "id": 0, 00:30:20.756 "state": "INACTIVE", 00:30:20.756 "utilization": 0.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 1, 00:30:20.756 "state": "CLOSED", 00:30:20.756 "utilization": 1.0 00:30:20.756 }, 00:30:20.756 { 00:30:20.756 "id": 2, 00:30:20.756 "state": "CLOSED", 00:30:20.757 "utilization": 1.0 00:30:20.757 }, 00:30:20.757 { 00:30:20.757 "id": 3, 00:30:20.757 "state": "OPEN", 00:30:20.757 "utilization": 0.001953125 00:30:20.757 }, 00:30:20.757 { 00:30:20.757 "id": 4, 00:30:20.757 "state": "OPEN", 00:30:20.757 "utilization": 0.0 00:30:20.757 } 00:30:20.757 ], 00:30:20.757 "read-only": true 00:30:20.757 }, 00:30:20.757 { 00:30:20.757 "name": "verbose_mode", 
00:30:20.757 "value": true, 00:30:20.757 "unit": "", 00:30:20.757 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:20.757 }, 00:30:20.757 { 00:30:20.757 "name": "prep_upgrade_on_shutdown", 00:30:20.757 "value": true, 00:30:20.757 "unit": "", 00:30:20.757 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:20.757 } 00:30:20.757 ] 00:30:20.757 } 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81305 ]] 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81305 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81305 ']' 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81305 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:20.757 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81305 00:30:21.016 killing process with pid 81305 00:30:21.016 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:21.016 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:21.016 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81305' 00:30:21.016 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81305 00:30:21.016 12:40:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81305 00:30:21.955 [2024-10-07 12:40:45.220442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:21.955 [2024-10-07 12:40:45.240396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.955 [2024-10-07 12:40:45.240438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:21.955 [2024-10-07 12:40:45.240455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:21.955 [2024-10-07 12:40:45.240466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.955 [2024-10-07 12:40:45.240490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:21.955 [2024-10-07 12:40:45.245277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.955 [2024-10-07 12:40:45.245318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:21.955 [2024-10-07 12:40:45.245333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.777 ms 00:30:21.955 [2024-10-07 12:40:45.245344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.110 [2024-10-07 12:40:52.444169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.444239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:30.111 [2024-10-07 12:40:52.444258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7210.479 ms 00:30:30.111 [2024-10-07 12:40:52.444270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.445483] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.445519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:30.111 [2024-10-07 12:40:52.445531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.197 ms 00:30:30.111 [2024-10-07 12:40:52.445542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.446421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.446443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:30.111 [2024-10-07 12:40:52.446455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:30:30.111 [2024-10-07 12:40:52.446465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.461016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.461054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:30.111 [2024-10-07 12:40:52.461067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.538 ms 00:30:30.111 [2024-10-07 12:40:52.461088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.470166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.470203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:30.111 [2024-10-07 12:40:52.470216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.057 ms 00:30:30.111 [2024-10-07 12:40:52.470234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.470313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.470326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:30.111 [2024-10-07 12:40:52.470338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:30.111 [2024-10-07 12:40:52.470348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.484492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.484526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:30.111 [2024-10-07 12:40:52.484538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.149 ms 00:30:30.111 [2024-10-07 12:40:52.484548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.498000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.498031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:30.111 [2024-10-07 12:40:52.498043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.440 ms 00:30:30.111 [2024-10-07 12:40:52.498053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.511538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.511751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:30.111 [2024-10-07 12:40:52.511771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.473 ms 00:30:30.111 [2024-10-07 12:40:52.511782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.525408] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.525440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:30.111 [2024-10-07 12:40:52.525452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.535 ms 00:30:30.111 [2024-10-07 12:40:52.525462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.525494] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:30.111 [2024-10-07 12:40:52.525511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:30.111 [2024-10-07 12:40:52.525523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:30.111 [2024-10-07 12:40:52.525534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:30.111 [2024-10-07 12:40:52.525545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:30.111 [2024-10-07 12:40:52.525717] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:30.111 [2024-10-07 12:40:52.525727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5676c5d0-ea0a-43d4-8ab1-2a51fc8a962c 00:30:30.111 [2024-10-07 12:40:52.525738] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:30.111 [2024-10-07 12:40:52.525752] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:30.111 [2024-10-07 12:40:52.525762] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:30.111 [2024-10-07 12:40:52.525778] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:30.111 [2024-10-07 12:40:52.525788] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:30.111 [2024-10-07 12:40:52.525798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:30.111 [2024-10-07 12:40:52.525808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:30.111 [2024-10-07 12:40:52.525817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:30.111 [2024-10-07 12:40:52.525827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:30.111 [2024-10-07 12:40:52.525837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.525848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:30.111 [2024-10-07 12:40:52.525859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:30:30.111 [2024-10-07 12:40:52.525869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.545939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.545970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:30.111 [2024-10-07 12:40:52.545982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.074 ms 00:30:30.111 [2024-10-07 12:40:52.545992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.546573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.111 [2024-10-07 12:40:52.546585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:30.111 [2024-10-07 12:40:52.546596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:30:30.111 [2024-10-07 12:40:52.546606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.606820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.606854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:30.111 [2024-10-07 12:40:52.606868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.606878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.606930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.606943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:30.111 [2024-10-07 12:40:52.606961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.606972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.607074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.607089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:30.111 [2024-10-07 12:40:52.607100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.607111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.607129] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.607140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:30.111 [2024-10-07 12:40:52.607152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.607162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.732945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.732999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:30.111 [2024-10-07 12:40:52.733015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.733025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.831977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.832028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:30.111 [2024-10-07 12:40:52.832043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.832054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.832178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.832198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:30.111 [2024-10-07 12:40:52.832210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.832220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.832270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.111 [2024-10-07 12:40:52.832282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:30.111 [2024-10-07 12:40:52.832292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.111 [2024-10-07 12:40:52.832302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.111 [2024-10-07 12:40:52.832419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.112 [2024-10-07 12:40:52.832437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:30.112 [2024-10-07 12:40:52.832448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.112 [2024-10-07 12:40:52.832458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.112 [2024-10-07 12:40:52.832497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.112 [2024-10-07 12:40:52.832510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:30.112 [2024-10-07 12:40:52.832521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.112 [2024-10-07 12:40:52.832532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.112 [2024-10-07 12:40:52.832577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.112 [2024-10-07 12:40:52.832588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:30.112 [2024-10-07 12:40:52.832603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.112 [2024-10-07 12:40:52.832613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.112 
[2024-10-07 12:40:52.832666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:30.112 [2024-10-07 12:40:52.832678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:30.112 [2024-10-07 12:40:52.832689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:30.112 [2024-10-07 12:40:52.832699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.112 [2024-10-07 12:40:52.832845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7604.739 ms, result 0 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81879 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81879 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81879 ']' 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:35.415 12:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:35.415 [2024-10-07 12:40:58.409942] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
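The run above is the clean half of the test: killprocess (autotest_common.sh@969/@974) sends a plain kill, i.e. SIGTERM, so FTL gets to run its full 'FTL shutdown' chain -- persisting the L2P, NV-cache metadata, band info and superblock, then logging 'Set FTL clean state' -- before tcp_target_setup relaunches the target from the saved tgt.json. A minimal sketch of that sequence, using only the paths and flags visible in the trace; $spdk_tgt_pid and $spdk_dir are stand-in variable names:

kill "$spdk_tgt_pid"    # plain SIGTERM: triggers the 'FTL shutdown' management process seen above
wait "$spdk_tgt_pid"    # returns once 'Set FTL clean state' and the superblock persist complete
"$spdk_dir/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$spdk_dir/test/ftl/config/tgt.json" &    # restart from the saved JSON config
spdk_tgt_pid=$!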
00:30:35.415 [2024-10-07 12:40:58.410360] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81879 ] 00:30:35.415 [2024-10-07 12:40:58.578859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.674 [2024-10-07 12:40:58.763762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.612 [2024-10-07 12:40:59.700223] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:36.612 [2024-10-07 12:40:59.700457] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:36.612 [2024-10-07 12:40:59.845660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.612 [2024-10-07 12:40:59.845708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:36.612 [2024-10-07 12:40:59.845723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:36.612 [2024-10-07 12:40:59.845732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.612 [2024-10-07 12:40:59.845779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.612 [2024-10-07 12:40:59.845790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:36.612 [2024-10-07 12:40:59.845800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:36.612 [2024-10-07 12:40:59.845809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.612 [2024-10-07 12:40:59.845838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:36.612 [2024-10-07 12:40:59.846766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:36.612 [2024-10-07 12:40:59.846789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.612 [2024-10-07 12:40:59.846799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:36.612 [2024-10-07 12:40:59.846810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.964 ms 00:30:36.612 [2024-10-07 12:40:59.846823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.612 [2024-10-07 12:40:59.848321] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:36.612 [2024-10-07 12:40:59.866722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.612 [2024-10-07 12:40:59.866761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:36.612 [2024-10-07 12:40:59.866775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.432 ms 00:30:36.612 [2024-10-07 12:40:59.866785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.612 [2024-10-07 12:40:59.866845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.612 [2024-10-07 12:40:59.866856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:36.612 [2024-10-07 12:40:59.866866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:36.612 [2024-10-07 12:40:59.866875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.612 [2024-10-07 12:40:59.873826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.612 [2024-10-07 
12:40:59.873856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:36.612 [2024-10-07 12:40:59.873868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.843 ms 00:30:36.612 [2024-10-07 12:40:59.873877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.612 [2024-10-07 12:40:59.873965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.613 [2024-10-07 12:40:59.873979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:36.613 [2024-10-07 12:40:59.873993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:36.613 [2024-10-07 12:40:59.874004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.613 [2024-10-07 12:40:59.874048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.613 [2024-10-07 12:40:59.874059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:36.613 [2024-10-07 12:40:59.874070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:36.613 [2024-10-07 12:40:59.874080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.613 [2024-10-07 12:40:59.874105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:36.613 [2024-10-07 12:40:59.878912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.613 [2024-10-07 12:40:59.878941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:36.613 [2024-10-07 12:40:59.878961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.820 ms 00:30:36.613 [2024-10-07 12:40:59.878986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.613 [2024-10-07 12:40:59.879013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.613 [2024-10-07 12:40:59.879023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:36.613 [2024-10-07 12:40:59.879038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:36.613 [2024-10-07 12:40:59.879047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.613 [2024-10-07 12:40:59.879101] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:36.613 [2024-10-07 12:40:59.879125] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:36.613 [2024-10-07 12:40:59.879160] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:36.613 [2024-10-07 12:40:59.879176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:36.613 [2024-10-07 12:40:59.879262] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:36.613 [2024-10-07 12:40:59.879278] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:36.613 [2024-10-07 12:40:59.879291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:36.613 [2024-10-07 12:40:59.879303] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879331] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879342] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:36.613 [2024-10-07 12:40:59.879351] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:36.613 [2024-10-07 12:40:59.879361] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:36.613 [2024-10-07 12:40:59.879370] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:36.613 [2024-10-07 12:40:59.879381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.613 [2024-10-07 12:40:59.879391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:36.613 [2024-10-07 12:40:59.879401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:30:36.613 [2024-10-07 12:40:59.879414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.613 [2024-10-07 12:40:59.879487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.613 [2024-10-07 12:40:59.879497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:36.613 [2024-10-07 12:40:59.879507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:36.613 [2024-10-07 12:40:59.879517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.613 [2024-10-07 12:40:59.879606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:36.613 [2024-10-07 12:40:59.879618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:36.613 [2024-10-07 12:40:59.879629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:36.613 [2024-10-07 12:40:59.879662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:36.613 [2024-10-07 12:40:59.879681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:36.613 [2024-10-07 12:40:59.879690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:36.613 [2024-10-07 12:40:59.879699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:36.613 [2024-10-07 12:40:59.879717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:36.613 [2024-10-07 12:40:59.879726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:36.613 [2024-10-07 12:40:59.879744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:36.613 [2024-10-07 12:40:59.879753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:36.613 [2024-10-07 12:40:59.879772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:36.613 [2024-10-07 12:40:59.879780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879790] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:36.613 [2024-10-07 12:40:59.879799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:36.613 [2024-10-07 12:40:59.879808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:36.613 [2024-10-07 12:40:59.879826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:36.613 [2024-10-07 12:40:59.879845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:36.613 [2024-10-07 12:40:59.879864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:36.613 [2024-10-07 12:40:59.879873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:36.613 [2024-10-07 12:40:59.879891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:36.613 [2024-10-07 12:40:59.879900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:36.613 [2024-10-07 12:40:59.879918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:36.613 [2024-10-07 12:40:59.879927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:36.613 [2024-10-07 12:40:59.879959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:36.613 [2024-10-07 12:40:59.879968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:36.613 [2024-10-07 12:40:59.879986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:36.613 [2024-10-07 12:40:59.879995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.880004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:36.613 [2024-10-07 12:40:59.880013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:36.613 [2024-10-07 12:40:59.880023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.880032] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:36.613 [2024-10-07 12:40:59.880042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:36.613 [2024-10-07 12:40:59.880051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:36.613 [2024-10-07 12:40:59.880061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:36.613 [2024-10-07 12:40:59.880071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:36.613 [2024-10-07 12:40:59.880081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:36.613 [2024-10-07 12:40:59.880090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:36.613 [2024-10-07 12:40:59.880100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:36.613 [2024-10-07 12:40:59.880108] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:36.613 [2024-10-07 12:40:59.880118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:36.613 [2024-10-07 12:40:59.880129] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:36.613 [2024-10-07 12:40:59.880142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:36.613 [2024-10-07 12:40:59.880163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:36.613 [2024-10-07 12:40:59.880194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:36.613 [2024-10-07 12:40:59.880204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:36.613 [2024-10-07 12:40:59.880214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:36.613 [2024-10-07 12:40:59.880224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:36.613 [2024-10-07 12:40:59.880284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:36.613 [2024-10-07 12:40:59.880294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:36.613 [2024-10-07 12:40:59.880305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:36.614 [2024-10-07 12:40:59.880315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:36.614 [2024-10-07 12:40:59.880325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:36.614 [2024-10-07 12:40:59.880335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:36.614 [2024-10-07 12:40:59.880348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:36.614 [2024-10-07 12:40:59.880359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.614 [2024-10-07 12:40:59.880370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:36.614 [2024-10-07 12:40:59.880384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 00:30:36.614 [2024-10-07 12:40:59.880394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.614 [2024-10-07 12:40:59.880438] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:36.614 [2024-10-07 12:40:59.880451] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:40.810 [2024-10-07 12:41:03.725543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.725618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:40.810 [2024-10-07 12:41:03.725636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3851.345 ms 00:30:40.810 [2024-10-07 12:41:03.725655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.762258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.762314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:40.810 [2024-10-07 12:41:03.762331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.331 ms 00:30:40.810 [2024-10-07 12:41:03.762341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.762425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.762437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:40.810 [2024-10-07 12:41:03.762448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:40.810 [2024-10-07 12:41:03.762457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.836025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.836241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:40.810 [2024-10-07 12:41:03.836281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.622 ms 00:30:40.810 [2024-10-07 12:41:03.836293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.836348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.836360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:40.810 [2024-10-07 12:41:03.836371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:40.810 [2024-10-07 12:41:03.836382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.836871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.836886] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:40.810 [2024-10-07 12:41:03.836897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.431 ms 00:30:40.810 [2024-10-07 12:41:03.836907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.836962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.836974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:40.810 [2024-10-07 12:41:03.836985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:40.810 [2024-10-07 12:41:03.836995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.857328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.857363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:40.810 [2024-10-07 12:41:03.857376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.343 ms 00:30:40.810 [2024-10-07 12:41:03.857386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.875439] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:40.810 [2024-10-07 12:41:03.875621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:40.810 [2024-10-07 12:41:03.875641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.875653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:40.810 [2024-10-07 12:41:03.875664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.168 ms 00:30:40.810 [2024-10-07 12:41:03.875674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.894629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.894668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:40.810 [2024-10-07 12:41:03.894681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.924 ms 00:30:40.810 [2024-10-07 12:41:03.894692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.911576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.911615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:40.810 [2024-10-07 12:41:03.911628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.860 ms 00:30:40.810 [2024-10-07 12:41:03.911654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.928613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.928648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:40.810 [2024-10-07 12:41:03.928661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.944 ms 00:30:40.810 [2024-10-07 12:41:03.928686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:03.930103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:03.930136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:40.810 [2024-10-07 
12:41:03.930149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.312 ms 00:30:40.810 [2024-10-07 12:41:03.930160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.012262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.012319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:40.810 [2024-10-07 12:41:04.012335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.207 ms 00:30:40.810 [2024-10-07 12:41:04.012346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.022656] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:40.810 [2024-10-07 12:41:04.023356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.023382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:40.810 [2024-10-07 12:41:04.023416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.979 ms 00:30:40.810 [2024-10-07 12:41:04.023427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.023517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.023530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:40.810 [2024-10-07 12:41:04.023542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:40.810 [2024-10-07 12:41:04.023552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.023614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.023626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:40.810 [2024-10-07 12:41:04.023638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:40.810 [2024-10-07 12:41:04.023652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.023675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.023687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:40.810 [2024-10-07 12:41:04.023697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:40.810 [2024-10-07 12:41:04.023707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.023744] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:40.810 [2024-10-07 12:41:04.023756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.023767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:40.810 [2024-10-07 12:41:04.023777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:40.810 [2024-10-07 12:41:04.023787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.057792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.057831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:40.810 [2024-10-07 12:41:04.057844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.035 ms 00:30:40.810 [2024-10-07 12:41:04.057872] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.057962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.810 [2024-10-07 12:41:04.057975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:40.810 [2024-10-07 12:41:04.057986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:40.810 [2024-10-07 12:41:04.058000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.810 [2024-10-07 12:41:04.059108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4219.826 ms, result 0 00:30:40.810 [2024-10-07 12:41:04.074169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.810 [2024-10-07 12:41:04.090181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:40.810 [2024-10-07 12:41:04.098937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:41.379 12:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:41.379 12:41:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:30:41.379 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:41.379 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:41.379 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:41.379 [2024-10-07 12:41:04.562353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.379 [2024-10-07 12:41:04.562532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:41.379 [2024-10-07 12:41:04.562555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:41.379 [2024-10-07 12:41:04.562566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.379 [2024-10-07 12:41:04.562600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.379 [2024-10-07 12:41:04.562612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:41.379 [2024-10-07 12:41:04.562624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:41.379 [2024-10-07 12:41:04.562635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.379 [2024-10-07 12:41:04.562656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:41.379 [2024-10-07 12:41:04.562674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:41.379 [2024-10-07 12:41:04.562685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:41.379 [2024-10-07 12:41:04.562696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:41.379 [2024-10-07 12:41:04.562752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.393 ms, result 0 00:30:41.379 true 00:30:41.379 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:41.639 { 00:30:41.639 "name": "ftl", 00:30:41.639 "properties": [ 00:30:41.639 { 00:30:41.639 "name": "superblock_version", 00:30:41.639 "value": 5, 00:30:41.639 "read-only": true 00:30:41.639 }, 
00:30:41.639 { 00:30:41.639 "name": "base_device", 00:30:41.639 "bands": [ 00:30:41.639 { 00:30:41.639 "id": 0, 00:30:41.639 "state": "CLOSED", 00:30:41.639 "validity": 1.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 1, 00:30:41.639 "state": "CLOSED", 00:30:41.639 "validity": 1.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 2, 00:30:41.639 "state": "CLOSED", 00:30:41.639 "validity": 0.007843137254901933 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 3, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 4, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 5, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 6, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 7, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 8, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 9, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 10, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 11, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 12, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 13, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 14, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 15, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 16, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 17, 00:30:41.639 "state": "FREE", 00:30:41.639 "validity": 0.0 00:30:41.639 } 00:30:41.639 ], 00:30:41.639 "read-only": true 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "name": "cache_device", 00:30:41.639 "type": "bdev", 00:30:41.639 "chunks": [ 00:30:41.639 { 00:30:41.639 "id": 0, 00:30:41.639 "state": "INACTIVE", 00:30:41.639 "utilization": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 1, 00:30:41.639 "state": "OPEN", 00:30:41.639 "utilization": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 2, 00:30:41.639 "state": "OPEN", 00:30:41.639 "utilization": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 3, 00:30:41.639 "state": "FREE", 00:30:41.639 "utilization": 0.0 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "id": 4, 00:30:41.639 "state": "FREE", 00:30:41.639 "utilization": 0.0 00:30:41.639 } 00:30:41.639 ], 00:30:41.639 "read-only": true 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "name": "verbose_mode", 00:30:41.639 "value": true, 00:30:41.639 "unit": "", 00:30:41.639 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:41.639 }, 00:30:41.639 { 00:30:41.639 "name": "prep_upgrade_on_shutdown", 00:30:41.639 "value": false, 00:30:41.639 "unit": "", 00:30:41.639 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:41.639 } 00:30:41.639 ] 00:30:41.639 } 00:30:41.639 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:41.639 12:41:04 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:41.639 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:41.899 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:41.899 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:41.899 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:41.899 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:41.899 12:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:42.158 Validate MD5 checksum, iteration 1 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:42.158 12:41:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:42.158 [2024-10-07 12:41:05.283179] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
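What starts here is the validate-checksum loop (upgrade_shutdown.sh@96-@99 in the trace): each iteration pulls one 1 GiB window off ftln1 over NVMe/TCP -- 1024 blocks of 1 MiB at queue depth 2, advancing --skip by 1024 per pass -- hashes the file, and compares the hash against the value recorded for that window. A hedged reconstruction of the loop's shape; the flags and the iterations counter come straight from the trace, while $testdir and the sums array are illustrative stand-ins:

skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # tcp_dd is the common.sh wrapper that runs spdk_dd as an NVMe/TCP initiator (ini.json)
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    [[ $sum == "${sums[i]}" ]] || return 1    # each window must match its recorded hash
    skip=$((skip + 1024))
done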
00:30:42.158 [2024-10-07 12:41:05.283443] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81964 ] 00:30:42.417 [2024-10-07 12:41:05.452645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.417 [2024-10-07 12:41:05.650310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.321  [2024-10-07T12:41:08.180Z] Copying: 574/1024 [MB] (574 MBps) [2024-10-07T12:41:10.083Z] Copying: 1024/1024 [MB] (average 590 MBps) 00:30:46.792 00:30:46.792 12:41:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:46.792 12:41:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:48.169 Validate MD5 checksum, iteration 2 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=97c4711825b774d6bfd0117af3060c0c 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 97c4711825b774d6bfd0117af3060c0c != \9\7\c\4\7\1\1\8\2\5\b\7\7\4\d\6\b\f\d\0\1\1\7\a\f\3\0\6\0\c\0\c ]] 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:48.169 12:41:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:48.169 [2024-10-07 12:41:11.319841] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
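Each validation pass above reads a 1 GiB window from the ftln1 initiator-side bdev over NVMe/TCP with spdk_dd, hashes the file, and compares the digest against the checksum recorded for that window before shutdown (the glob-escaped right-hand side in the [[ ... != \9\7... ]] comparison is simply how bash xtrace prints the pattern). A sketch of the loop these traces imply; the file path, block size, and count come from this run, while iterations=2 and the sums array are assumptions standing in for state the test script carries between its write and validate phases.

    # Sketch of test_validate_checksum as traced above. tcp_dd is the
    # ftl/common.sh wrapper around spdk_dd seen at common.sh@199.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        # sums[i]: digest captured for this window before the shutdown
        [[ $sum == "${sums[i]}" ]] || exit 1
    done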
00:30:48.169 [2024-10-07 12:41:11.320344] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82031 ] 00:30:48.427 [2024-10-07 12:41:11.487335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.427 [2024-10-07 12:41:11.661186] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.332  [2024-10-07T12:41:14.191Z] Copying: 627/1024 [MB] (627 MBps) [2024-10-07T12:41:17.481Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:30:54.190 00:30:54.190 12:41:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:54.190 12:41:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2ecbd19344c2ed7ac43cc3807bac3dbc 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ecbd19344c2ed7ac43cc3807bac3dbc != \2\e\c\b\d\1\9\3\4\4\c\2\e\d\7\a\c\4\3\c\c\3\8\0\7\b\a\c\3\d\b\c ]] 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81879 ]] 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81879 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82115 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82115 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82115 ']' 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
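This is the dirty-shutdown pivot of the test: the old target (pid 81879) is killed with SIGKILL, so FTL never gets to persist a clean shutdown state, and a fresh target is immediately relaunched from the saved tgt.json, forcing the recovery path traced below. A sketch of that sequence, using the variable names that common.sh itself echoes in the "Killed ..." message further down:

    # tcp_target_shutdown_dirty: SIGKILL leaves the FTL superblock dirty.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # tcp_target_setup: relaunch from the JSON config captured earlier.
    "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs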
00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:56.097 12:41:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:56.097 [2024-10-07 12:41:18.987145] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:30:56.097 [2024-10-07 12:41:18.987672] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82115 ] 00:30:56.097 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81879 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:56.098 [2024-10-07 12:41:19.157081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.382 [2024-10-07 12:41:19.409252] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.329 [2024-10-07 12:41:20.498977] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:57.329 [2024-10-07 12:41:20.499037] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:57.589 [2024-10-07 12:41:20.646564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.589 [2024-10-07 12:41:20.646612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:57.589 [2024-10-07 12:41:20.646630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:57.589 [2024-10-07 12:41:20.646640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.589 [2024-10-07 12:41:20.646691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.589 [2024-10-07 12:41:20.646703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:57.590 [2024-10-07 12:41:20.646714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:57.590 [2024-10-07 12:41:20.646724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.646756] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:57.590 [2024-10-07 12:41:20.647796] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:57.590 [2024-10-07 12:41:20.647826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.647838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:57.590 [2024-10-07 12:41:20.647850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.086 ms 00:30:57.590 [2024-10-07 12:41:20.647865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.648286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:57.590 [2024-10-07 12:41:20.674692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.674728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:57.590 [2024-10-07 12:41:20.674749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.448 ms 00:30:57.590 [2024-10-07 12:41:20.674759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.688805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:57.590 [2024-10-07 12:41:20.688993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:57.590 [2024-10-07 12:41:20.689015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:57.590 [2024-10-07 12:41:20.689026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.689555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.689576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:57.590 [2024-10-07 12:41:20.689588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:30:57.590 [2024-10-07 12:41:20.689599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.689663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.689677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:57.590 [2024-10-07 12:41:20.689688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:57.590 [2024-10-07 12:41:20.689698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.689733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.689745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:57.590 [2024-10-07 12:41:20.689761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:57.590 [2024-10-07 12:41:20.689771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.689799] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:57.590 [2024-10-07 12:41:20.693885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.693924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:57.590 [2024-10-07 12:41:20.693937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.099 ms 00:30:57.590 [2024-10-07 12:41:20.693947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.693976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.693987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:57.590 [2024-10-07 12:41:20.693998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:57.590 [2024-10-07 12:41:20.694008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.694045] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:57.590 [2024-10-07 12:41:20.694071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:57.590 [2024-10-07 12:41:20.694114] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:57.590 [2024-10-07 12:41:20.694132] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:57.590 [2024-10-07 12:41:20.694223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:57.590 [2024-10-07 12:41:20.694253] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:57.590 [2024-10-07 12:41:20.694268] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:57.590 [2024-10-07 12:41:20.694281] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694293] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694304] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:57.590 [2024-10-07 12:41:20.694319] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:57.590 [2024-10-07 12:41:20.694329] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:57.590 [2024-10-07 12:41:20.694339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:57.590 [2024-10-07 12:41:20.694349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.694360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:57.590 [2024-10-07 12:41:20.694371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:30:57.590 [2024-10-07 12:41:20.694382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.694455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.590 [2024-10-07 12:41:20.694466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:57.590 [2024-10-07 12:41:20.694476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:57.590 [2024-10-07 12:41:20.694490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.590 [2024-10-07 12:41:20.694580] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:57.590 [2024-10-07 12:41:20.694599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:57.590 [2024-10-07 12:41:20.694612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:57.590 [2024-10-07 12:41:20.694643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:57.590 [2024-10-07 12:41:20.694664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:57.590 [2024-10-07 12:41:20.694673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:57.590 [2024-10-07 12:41:20.694684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:57.590 [2024-10-07 12:41:20.694704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:57.590 [2024-10-07 12:41:20.694713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:57.590 [2024-10-07 12:41:20.694732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:57.590 [2024-10-07 12:41:20.694741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:57.590 [2024-10-07 12:41:20.694759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:57.590 [2024-10-07 12:41:20.694768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:57.590 [2024-10-07 12:41:20.694787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:57.590 [2024-10-07 12:41:20.694796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:57.590 [2024-10-07 12:41:20.694825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:57.590 [2024-10-07 12:41:20.694834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:57.590 [2024-10-07 12:41:20.694853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:57.590 [2024-10-07 12:41:20.694863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:57.590 [2024-10-07 12:41:20.694882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:57.590 [2024-10-07 12:41:20.694892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:57.590 [2024-10-07 12:41:20.694912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:57.590 [2024-10-07 12:41:20.694937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:57.590 [2024-10-07 12:41:20.694965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:57.590 [2024-10-07 12:41:20.694974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.590 [2024-10-07 12:41:20.694985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:57.590 [2024-10-07 12:41:20.694995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:57.591 [2024-10-07 12:41:20.695005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.591 [2024-10-07 12:41:20.695015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:57.591 [2024-10-07 12:41:20.695026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:57.591 [2024-10-07 12:41:20.695036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:57.591 [2024-10-07 12:41:20.695046] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:57.591 [2024-10-07 12:41:20.695057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:57.591 [2024-10-07 12:41:20.695067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:57.591 [2024-10-07 12:41:20.695076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:57.591 [2024-10-07 12:41:20.695092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:57.591 [2024-10-07 12:41:20.695102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:57.591 [2024-10-07 12:41:20.695112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:57.591 [2024-10-07 12:41:20.695122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:57.591 [2024-10-07 12:41:20.695132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:57.591 [2024-10-07 12:41:20.695142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:57.591 [2024-10-07 12:41:20.695153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:57.591 [2024-10-07 12:41:20.695165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:57.591 [2024-10-07 12:41:20.695189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:57.591 [2024-10-07 12:41:20.695221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:57.591 [2024-10-07 12:41:20.695231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:57.591 [2024-10-07 12:41:20.695243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:57.591 [2024-10-07 12:41:20.695253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:57.591 [2024-10-07 12:41:20.695327] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:57.591 [2024-10-07 12:41:20.695338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:57.591 [2024-10-07 12:41:20.695360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:57.591 [2024-10-07 12:41:20.695370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:57.591 [2024-10-07 12:41:20.695381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:57.591 [2024-10-07 12:41:20.695392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.695402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:57.591 [2024-10-07 12:41:20.695412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:30:57.591 [2024-10-07 12:41:20.695423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.738196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.738231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:57.591 [2024-10-07 12:41:20.738246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.787 ms 00:30:57.591 [2024-10-07 12:41:20.738257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.738302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.738313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:57.591 [2024-10-07 12:41:20.738328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:57.591 [2024-10-07 12:41:20.738339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.799884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.799930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:57.591 [2024-10-07 12:41:20.799946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 61.552 ms 00:30:57.591 [2024-10-07 12:41:20.799957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.800010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.800022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:57.591 [2024-10-07 12:41:20.800034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:57.591 [2024-10-07 12:41:20.800067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.800219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.800233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:57.591 [2024-10-07 12:41:20.800246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:30:57.591 [2024-10-07 12:41:20.800256] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.800306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.800318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:57.591 [2024-10-07 12:41:20.800328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:57.591 [2024-10-07 12:41:20.800339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.824592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.824622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:57.591 [2024-10-07 12:41:20.824636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.268 ms 00:30:57.591 [2024-10-07 12:41:20.824646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.824779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.824795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:57.591 [2024-10-07 12:41:20.824806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:57.591 [2024-10-07 12:41:20.824816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.850379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.850415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:57.591 [2024-10-07 12:41:20.850430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.575 ms 00:30:57.591 [2024-10-07 12:41:20.850446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.591 [2024-10-07 12:41:20.864270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.591 [2024-10-07 12:41:20.864304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:57.591 [2024-10-07 12:41:20.864317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:30:57.591 [2024-10-07 12:41:20.864328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.851 [2024-10-07 12:41:20.954914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.851 [2024-10-07 12:41:20.955141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:57.851 [2024-10-07 12:41:20.955168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.650 ms 00:30:57.851 [2024-10-07 12:41:20.955180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.851 [2024-10-07 12:41:20.955470] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:57.851 [2024-10-07 12:41:20.955650] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:57.851 [2024-10-07 12:41:20.955830] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:57.851 [2024-10-07 12:41:20.956009] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:57.851 [2024-10-07 12:41:20.956026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.851 [2024-10-07 12:41:20.956038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:57.851 [2024-10-07 
12:41:20.956056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:30:57.851 [2024-10-07 12:41:20.956066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.851 [2024-10-07 12:41:20.956133] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:57.851 [2024-10-07 12:41:20.956149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.851 [2024-10-07 12:41:20.956160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:57.851 [2024-10-07 12:41:20.956172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:57.851 [2024-10-07 12:41:20.956183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.851 [2024-10-07 12:41:20.977423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.851 [2024-10-07 12:41:20.977460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:57.851 [2024-10-07 12:41:20.977474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.248 ms 00:30:57.851 [2024-10-07 12:41:20.977485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.851 [2024-10-07 12:41:20.990496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.851 [2024-10-07 12:41:20.990530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:57.851 [2024-10-07 12:41:20.990543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:57.851 [2024-10-07 12:41:20.990558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:57.851 [2024-10-07 12:41:20.990648] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:57.851 [2024-10-07 12:41:20.991015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:57.851 [2024-10-07 12:41:20.991030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:57.851 [2024-10-07 12:41:20.991043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.369 ms 00:30:57.851 [2024-10-07 12:41:20.991054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.419 [2024-10-07 12:41:21.607590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.419 [2024-10-07 12:41:21.607658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:58.419 [2024-10-07 12:41:21.607676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 616.421 ms 00:30:58.419 [2024-10-07 12:41:21.607688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.419 [2024-10-07 12:41:21.613677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.419 [2024-10-07 12:41:21.613913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:58.419 [2024-10-07 12:41:21.613938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.301 ms 00:30:58.419 [2024-10-07 12:41:21.613950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.419 [2024-10-07 12:41:21.614576] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:58.419 [2024-10-07 12:41:21.614607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.419 [2024-10-07 12:41:21.614621] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:58.419 [2024-10-07 12:41:21.614634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:30:58.419 [2024-10-07 12:41:21.614646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.419 [2024-10-07 12:41:21.614699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.419 [2024-10-07 12:41:21.614712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:58.419 [2024-10-07 12:41:21.614724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:58.419 [2024-10-07 12:41:21.614734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.419 [2024-10-07 12:41:21.614769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 625.137 ms, result 0 00:30:58.419 [2024-10-07 12:41:21.614815] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:58.419 [2024-10-07 12:41:21.615001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.419 [2024-10-07 12:41:21.615014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:58.419 [2024-10-07 12:41:21.615025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:30:58.419 [2024-10-07 12:41:21.615035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.237650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.237721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:58.988 [2024-10-07 12:41:22.237741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 622.507 ms 00:30:58.988 [2024-10-07 12:41:22.237752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.243485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.243525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:58.988 [2024-10-07 12:41:22.243538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.155 ms 00:30:58.988 [2024-10-07 12:41:22.243548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.243992] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:58.988 [2024-10-07 12:41:22.244040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.244051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:58.988 [2024-10-07 12:41:22.244062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:30:58.988 [2024-10-07 12:41:22.244072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.244106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.244117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:58.988 [2024-10-07 12:41:22.244128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:58.988 [2024-10-07 12:41:22.244138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 
12:41:22.244174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 630.381 ms, result 0 00:30:58.988 [2024-10-07 12:41:22.244224] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:58.988 [2024-10-07 12:41:22.244237] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:58.988 [2024-10-07 12:41:22.244250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.244262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:58.988 [2024-10-07 12:41:22.244277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1255.661 ms 00:30:58.988 [2024-10-07 12:41:22.244287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.244316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.244328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:58.988 [2024-10-07 12:41:22.244339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:58.988 [2024-10-07 12:41:22.244350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.256681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:58.988 [2024-10-07 12:41:22.256982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.257034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:58.988 [2024-10-07 12:41:22.257122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.634 ms 00:30:58.988 [2024-10-07 12:41:22.257157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.257772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.257896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:58.988 [2024-10-07 12:41:22.258014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:30:58.988 [2024-10-07 12:41:22.258051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.260070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.260192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:58.988 [2024-10-07 12:41:22.260337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.975 ms 00:30:58.988 [2024-10-07 12:41:22.260375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.260449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.260623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:58.988 [2024-10-07 12:41:22.260661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:58.988 [2024-10-07 12:41:22.260690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.260821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.988 [2024-10-07 12:41:22.260990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:58.988 
[2024-10-07 12:41:22.261029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:58.988 [2024-10-07 12:41:22.261060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.988 [2024-10-07 12:41:22.261116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.989 [2024-10-07 12:41:22.261295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:58.989 [2024-10-07 12:41:22.261330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:58.989 [2024-10-07 12:41:22.261359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.989 [2024-10-07 12:41:22.261423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:58.989 [2024-10-07 12:41:22.261441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.989 [2024-10-07 12:41:22.261451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:58.989 [2024-10-07 12:41:22.261462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:58.989 [2024-10-07 12:41:22.261472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.989 [2024-10-07 12:41:22.261549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.989 [2024-10-07 12:41:22.261567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:58.989 [2024-10-07 12:41:22.261582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:58.989 [2024-10-07 12:41:22.261592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.989 [2024-10-07 12:41:22.263023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1618.469 ms, result 0 00:30:58.989 [2024-10-07 12:41:22.277671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.248 [2024-10-07 12:41:22.293628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:59.248 [2024-10-07 12:41:22.304140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:59.248 Validate MD5 checksum, iteration 1 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:59.248 12:41:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:59.248 12:41:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:59.248 [2024-10-07 12:41:22.455058] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:30:59.248 [2024-10-07 12:41:22.455350] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82157 ] 00:30:59.507 [2024-10-07 12:41:22.627616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.767 [2024-10-07 12:41:22.812308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.146  [2024-10-07T12:41:25.374Z] Copying: 623/1024 [MB] (623 MBps) [2024-10-07T12:41:27.911Z] Copying: 1024/1024 [MB] (average 616 MBps) 00:31:04.620 00:31:04.620 12:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:04.620 12:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:06.528 Validate MD5 checksum, iteration 2 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=97c4711825b774d6bfd0117af3060c0c 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 97c4711825b774d6bfd0117af3060c0c != \9\7\c\4\7\1\1\8\2\5\b\7\7\4\d\6\b\f\d\0\1\1\7\a\f\3\0\6\0\c\0\c ]] 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:06.528 12:41:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:06.528 [2024-10-07 12:41:29.399979] Starting SPDK v25.01-pre git sha1 
3950cd1bb / DPDK 24.03.0 initialization... 00:31:06.528 [2024-10-07 12:41:29.400255] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82234 ] 00:31:06.528 [2024-10-07 12:41:29.568734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.528 [2024-10-07 12:41:29.750651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.434  [2024-10-07T12:41:32.293Z] Copying: 617/1024 [MB] (617 MBps) [2024-10-07T12:41:33.672Z] Copying: 1024/1024 [MB] (average 623 MBps) 00:31:10.381 00:31:10.381 12:41:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:10.381 12:41:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2ecbd19344c2ed7ac43cc3807bac3dbc 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ecbd19344c2ed7ac43cc3807bac3dbc != \2\e\c\b\d\1\9\3\4\4\c\2\e\d\7\a\c\4\3\c\c\3\8\0\7\b\a\c\3\d\b\c ]] 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82115 ]] 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82115 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82115 ']' 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 82115 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82115 00:31:12.288 killing process with pid 82115 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82115' 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 82115 00:31:12.288 12:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 82115 00:31:13.227 [2024-10-07 12:41:36.461645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:13.227 [2024-10-07 12:41:36.480420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.480461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:13.227 [2024-10-07 12:41:36.480478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:13.227 [2024-10-07 12:41:36.480492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.480518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:13.227 [2024-10-07 12:41:36.485208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.485239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:13.227 [2024-10-07 12:41:36.485251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.680 ms 00:31:13.227 [2024-10-07 12:41:36.485262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.485479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.485493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:13.227 [2024-10-07 12:41:36.485508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:31:13.227 [2024-10-07 12:41:36.485518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.486860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.486912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:13.227 [2024-10-07 12:41:36.486924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.325 ms 00:31:13.227 [2024-10-07 12:41:36.486935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.487823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.487854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:13.227 [2024-10-07 12:41:36.487865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:31:13.227 [2024-10-07 12:41:36.487876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.502229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.502265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:13.227 [2024-10-07 12:41:36.502278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.299 ms 00:31:13.227 [2024-10-07 12:41:36.502289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.509938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.509975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:13.227 [2024-10-07 12:41:36.509987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.622 ms 00:31:13.227 [2024-10-07 12:41:36.509997] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:13.227 [2024-10-07 12:41:36.510082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.227 [2024-10-07 12:41:36.510094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:13.227 [2024-10-07 12:41:36.510106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:13.227 [2024-10-07 12:41:36.510116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.487 [2024-10-07 12:41:36.525028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.525060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:13.487 [2024-10-07 12:41:36.525071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.918 ms 00:31:13.487 [2024-10-07 12:41:36.525081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.487 [2024-10-07 12:41:36.539861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.539895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:13.487 [2024-10-07 12:41:36.539916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.769 ms 00:31:13.487 [2024-10-07 12:41:36.539925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.487 [2024-10-07 12:41:36.554224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.554381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:13.487 [2024-10-07 12:41:36.554402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.288 ms 00:31:13.487 [2024-10-07 12:41:36.554412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.487 [2024-10-07 12:41:36.568967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.569120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:13.487 [2024-10-07 12:41:36.569139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.486 ms 00:31:13.487 [2024-10-07 12:41:36.569151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.487 [2024-10-07 12:41:36.569227] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:13.487 [2024-10-07 12:41:36.569244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:13.487 [2024-10-07 12:41:36.569258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:13.487 [2024-10-07 12:41:36.569270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:13.487 [2024-10-07 12:41:36.569282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 
[2024-10-07 12:41:36.569337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:13.487 [2024-10-07 12:41:36.569445] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:13.487 [2024-10-07 12:41:36.569455] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5676c5d0-ea0a-43d4-8ab1-2a51fc8a962c 00:31:13.487 [2024-10-07 12:41:36.569466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:13.487 [2024-10-07 12:41:36.569476] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:13.487 [2024-10-07 12:41:36.569486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:13.487 [2024-10-07 12:41:36.569503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:13.487 [2024-10-07 12:41:36.569513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:13.487 [2024-10-07 12:41:36.569524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:13.487 [2024-10-07 12:41:36.569534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:13.487 [2024-10-07 12:41:36.569544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:13.487 [2024-10-07 12:41:36.569553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:13.487 [2024-10-07 12:41:36.569563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.569577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:13.487 [2024-10-07 12:41:36.569588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:31:13.487 [2024-10-07 12:41:36.569601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.487 [2024-10-07 12:41:36.590833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.590872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:13.487 [2024-10-07 12:41:36.590885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.235 ms 00:31:13.487 [2024-10-07 12:41:36.590896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
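Each FTL management step in the shutdown sequence above is reported by mngt/ftl_mngt.c:trace_step as a fixed Action / name / duration / status quadruple. For readers digging through such captures, here is a minimal post-processing sketch that tabulates per-step durations; it assumes one notice per line and a hypothetical console.log capture file, and is not part of the SPDK tree:

# Pair every "name:" notice with the "duration:" notice that follows it.
awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     step = $0 }
     /trace_step/ && /duration:/ { sub(/.*duration: /, ""); printf "%10s  %s\n", $1 " ms", step }' console.log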
00:31:13.487 [2024-10-07 12:41:36.591540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.487 [2024-10-07 12:41:36.591667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:13.487 [2024-10-07 12:41:36.591686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:31:13.487 [2024-10-07 12:41:36.591699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.488 [2024-10-07 12:41:36.653529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.488 [2024-10-07 12:41:36.653666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:13.488 [2024-10-07 12:41:36.653687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.488 [2024-10-07 12:41:36.653699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.488 [2024-10-07 12:41:36.653737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.488 [2024-10-07 12:41:36.653751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:13.488 [2024-10-07 12:41:36.653762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.488 [2024-10-07 12:41:36.653773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.488 [2024-10-07 12:41:36.653856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.488 [2024-10-07 12:41:36.653871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:13.488 [2024-10-07 12:41:36.653889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.488 [2024-10-07 12:41:36.653921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.488 [2024-10-07 12:41:36.653943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.488 [2024-10-07 12:41:36.653954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:13.488 [2024-10-07 12:41:36.653965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.488 [2024-10-07 12:41:36.653975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.747 [2024-10-07 12:41:36.787766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.787960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:13.748 [2024-10-07 12:41:36.787986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.787998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.888475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.888528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:13.748 [2024-10-07 12:41:36.888545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.888556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.888693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.888706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:13.748 [2024-10-07 12:41:36.888718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.888735] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.888790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.888807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:13.748 [2024-10-07 12:41:36.888818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.888829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.889130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.889182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:13.748 [2024-10-07 12:41:36.889214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.889252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.889347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.889442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:13.748 [2024-10-07 12:41:36.889479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.889509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.889587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.889620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:13.748 [2024-10-07 12:41:36.889687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.889771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.889850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:13.748 [2024-10-07 12:41:36.889863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:13.748 [2024-10-07 12:41:36.889875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:13.748 [2024-10-07 12:41:36.889887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.748 [2024-10-07 12:41:36.890052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 410.248 ms, result 0 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:15.128 Remove shared memory files 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:15.128 12:41:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81879 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:15.128 ************************************ 00:31:15.128 END TEST ftl_upgrade_shutdown 00:31:15.128 ************************************ 00:31:15.128 00:31:15.128 real 1m32.621s 00:31:15.128 user 2m3.544s 00:31:15.128 sys 0m24.772s 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:15.128 12:41:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:15.128 12:41:38 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:15.128 12:41:38 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:15.128 12:41:38 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:31:15.128 12:41:38 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:15.128 12:41:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:15.128 ************************************ 00:31:15.128 START TEST ftl_restore_fast 00:31:15.128 ************************************ 00:31:15.128 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:15.387 * Looking for test storage... 00:31:15.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.387 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:15.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.388 --rc genhtml_branch_coverage=1 00:31:15.388 --rc genhtml_function_coverage=1 00:31:15.388 --rc genhtml_legend=1 00:31:15.388 --rc geninfo_all_blocks=1 00:31:15.388 --rc geninfo_unexecuted_blocks=1 00:31:15.388 00:31:15.388 ' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:15.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.388 --rc genhtml_branch_coverage=1 00:31:15.388 --rc genhtml_function_coverage=1 00:31:15.388 --rc genhtml_legend=1 00:31:15.388 --rc geninfo_all_blocks=1 00:31:15.388 --rc geninfo_unexecuted_blocks=1 00:31:15.388 00:31:15.388 ' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:15.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.388 --rc genhtml_branch_coverage=1 00:31:15.388 --rc genhtml_function_coverage=1 00:31:15.388 --rc genhtml_legend=1 00:31:15.388 --rc geninfo_all_blocks=1 00:31:15.388 --rc geninfo_unexecuted_blocks=1 00:31:15.388 00:31:15.388 ' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:15.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.388 --rc genhtml_branch_coverage=1 00:31:15.388 --rc genhtml_function_coverage=1 00:31:15.388 --rc genhtml_legend=1 00:31:15.388 --rc geninfo_all_blocks=1 00:31:15.388 --rc geninfo_unexecuted_blocks=1 00:31:15.388 00:31:15.388 ' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
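The lcov version check traced above (lt 1.15 2, expanded to cmp_versions 1.15 '<' 2) compares versions field by field: both strings are split on '.', '-' and ':', missing fields are treated as 0, and the requested operator is decided at the first differing field. A rough reconstruction of that helper for readers following the trace; the real scripts/common.sh differs in detail (for instance, it normalizes each field through the decimal helper seen in the xtrace):

# usage: cmp_versions 1.15 '<' 2   -> exit status 0, since 1 < 2 in the first field
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}           # missing fields count as 0
        (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]       # all fields equal
}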
00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6yeXawWbm4 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:15.388 12:41:38 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=82405 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 82405 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 82405 ']' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:15.388 12:41:38 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:15.647 [2024-10-07 12:41:38.747512] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:31:15.647 [2024-10-07 12:41:38.747633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82405 ] 00:31:15.647 [2024-10-07 12:41:38.916304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.906 [2024-10-07 12:41:39.117885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:16.861 12:41:39 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:17.145 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:17.145 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:17.146 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:17.146 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:17.146 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:17.146 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:17.146 12:41:40 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:31:17.146 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:17.405 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:17.405 { 00:31:17.405 "name": "nvme0n1", 00:31:17.405 "aliases": [ 00:31:17.405 "7458ecbc-1726-4de1-8cf7-09259cdb96a6" 00:31:17.405 ], 00:31:17.405 "product_name": "NVMe disk", 00:31:17.405 "block_size": 4096, 00:31:17.405 "num_blocks": 1310720, 00:31:17.405 "uuid": "7458ecbc-1726-4de1-8cf7-09259cdb96a6", 00:31:17.405 "numa_id": -1, 00:31:17.405 "assigned_rate_limits": { 00:31:17.405 "rw_ios_per_sec": 0, 00:31:17.405 "rw_mbytes_per_sec": 0, 00:31:17.405 "r_mbytes_per_sec": 0, 00:31:17.405 "w_mbytes_per_sec": 0 00:31:17.405 }, 00:31:17.405 "claimed": true, 00:31:17.405 "claim_type": "read_many_write_one", 00:31:17.405 "zoned": false, 00:31:17.405 "supported_io_types": { 00:31:17.405 "read": true, 00:31:17.405 "write": true, 00:31:17.405 "unmap": true, 00:31:17.405 "flush": true, 00:31:17.405 "reset": true, 00:31:17.405 "nvme_admin": true, 00:31:17.405 "nvme_io": true, 00:31:17.405 "nvme_io_md": false, 00:31:17.405 "write_zeroes": true, 00:31:17.406 "zcopy": false, 00:31:17.406 "get_zone_info": false, 00:31:17.406 "zone_management": false, 00:31:17.406 "zone_append": false, 00:31:17.406 "compare": true, 00:31:17.406 "compare_and_write": false, 00:31:17.406 "abort": true, 00:31:17.406 "seek_hole": false, 00:31:17.406 "seek_data": false, 00:31:17.406 "copy": true, 00:31:17.406 "nvme_iov_md": false 00:31:17.406 }, 00:31:17.406 "driver_specific": { 00:31:17.406 "nvme": [ 00:31:17.406 { 00:31:17.406 "pci_address": "0000:00:11.0", 00:31:17.406 "trid": { 00:31:17.406 "trtype": "PCIe", 00:31:17.406 "traddr": "0000:00:11.0" 00:31:17.406 }, 00:31:17.406 "ctrlr_data": { 00:31:17.406 "cntlid": 0, 00:31:17.406 "vendor_id": "0x1b36", 00:31:17.406 "model_number": "QEMU NVMe Ctrl", 00:31:17.406 "serial_number": "12341", 00:31:17.406 "firmware_revision": "8.0.0", 00:31:17.406 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:17.406 "oacs": { 00:31:17.406 "security": 0, 00:31:17.406 "format": 1, 00:31:17.406 "firmware": 0, 00:31:17.406 "ns_manage": 1 00:31:17.406 }, 00:31:17.406 "multi_ctrlr": false, 00:31:17.406 "ana_reporting": false 00:31:17.406 }, 00:31:17.406 "vs": { 00:31:17.406 "nvme_version": "1.4" 00:31:17.406 }, 00:31:17.406 "ns_data": { 00:31:17.406 "id": 1, 00:31:17.406 "can_share": false 00:31:17.406 } 00:31:17.406 } 00:31:17.406 ], 00:31:17.406 "mp_policy": "active_passive" 00:31:17.406 } 00:31:17.406 } 00:31:17.406 ]' 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
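The get_bdev_size call traced here reduces to one bdev_get_bdevs RPC plus two jq queries, with the size in MiB computed as block_size * num_blocks / 1024 / 1024; for the nvme0n1 JSON above that is 4096 * 1310720 / 1024 / 1024 = 5120 MiB, the value echoed back as base_size. A condensed sketch of the helper, reconstructed from the xtrace (the real common/autotest_common.sh version may differ in detail):

get_bdev_size() {                                     # usage: get_bdev_size nvme0n1
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")       # 4096 in the dump above
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")       # 1310720 in the dump above
    echo $(( bs * nb / 1024 / 1024 ))                 # -> 5120 (MiB)
}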
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:17.406 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:17.665 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=1e7e79bf-99f0-4bd5-bba6-de2d8da78248 00:31:17.665 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:17.665 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e7e79bf-99f0-4bd5-bba6-de2d8da78248 00:31:17.665 12:41:40 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:17.924 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=9a887006-011c-46cc-b372-84ab6575a018 00:31:17.924 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9a887006-011c-46cc-b372-84ab6575a018 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:18.184 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:18.444 { 00:31:18.444 "name": "9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6", 00:31:18.444 "aliases": [ 00:31:18.444 "lvs/nvme0n1p0" 00:31:18.444 ], 00:31:18.444 "product_name": "Logical Volume", 00:31:18.444 "block_size": 4096, 00:31:18.444 "num_blocks": 26476544, 00:31:18.444 "uuid": "9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6", 00:31:18.444 "assigned_rate_limits": { 00:31:18.444 "rw_ios_per_sec": 0, 00:31:18.444 "rw_mbytes_per_sec": 0, 00:31:18.444 "r_mbytes_per_sec": 0, 00:31:18.444 "w_mbytes_per_sec": 0 00:31:18.444 }, 00:31:18.444 "claimed": false, 00:31:18.444 "zoned": false, 00:31:18.444 "supported_io_types": { 00:31:18.444 "read": true, 00:31:18.444 "write": true, 00:31:18.444 "unmap": true, 00:31:18.444 "flush": false, 00:31:18.444 "reset": true, 00:31:18.444 "nvme_admin": false, 00:31:18.444 "nvme_io": false, 00:31:18.444 "nvme_io_md": false, 00:31:18.444 "write_zeroes": true, 00:31:18.444 "zcopy": false, 00:31:18.444 "get_zone_info": false, 00:31:18.444 "zone_management": false, 00:31:18.444 
"zone_append": false, 00:31:18.444 "compare": false, 00:31:18.444 "compare_and_write": false, 00:31:18.444 "abort": false, 00:31:18.444 "seek_hole": true, 00:31:18.444 "seek_data": true, 00:31:18.444 "copy": false, 00:31:18.444 "nvme_iov_md": false 00:31:18.444 }, 00:31:18.444 "driver_specific": { 00:31:18.444 "lvol": { 00:31:18.444 "lvol_store_uuid": "9a887006-011c-46cc-b372-84ab6575a018", 00:31:18.444 "base_bdev": "nvme0n1", 00:31:18.444 "thin_provision": true, 00:31:18.444 "num_allocated_clusters": 0, 00:31:18.444 "snapshot": false, 00:31:18.444 "clone": false, 00:31:18.444 "esnap_clone": false 00:31:18.444 } 00:31:18.444 } 00:31:18.444 } 00:31:18.444 ]' 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:18.444 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:18.704 12:41:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:18.963 { 00:31:18.963 "name": "9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6", 00:31:18.963 "aliases": [ 00:31:18.963 "lvs/nvme0n1p0" 00:31:18.963 ], 00:31:18.963 "product_name": "Logical Volume", 00:31:18.963 "block_size": 4096, 00:31:18.963 "num_blocks": 26476544, 00:31:18.963 "uuid": "9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6", 00:31:18.963 "assigned_rate_limits": { 00:31:18.963 "rw_ios_per_sec": 0, 00:31:18.963 "rw_mbytes_per_sec": 0, 00:31:18.963 "r_mbytes_per_sec": 0, 00:31:18.963 "w_mbytes_per_sec": 0 00:31:18.963 }, 00:31:18.963 "claimed": false, 00:31:18.963 "zoned": false, 00:31:18.963 "supported_io_types": { 00:31:18.963 "read": true, 00:31:18.963 "write": true, 00:31:18.963 "unmap": true, 00:31:18.963 "flush": false, 00:31:18.963 "reset": true, 00:31:18.963 "nvme_admin": false, 00:31:18.963 "nvme_io": false, 00:31:18.963 "nvme_io_md": false, 00:31:18.963 "write_zeroes": true, 00:31:18.963 "zcopy": false, 00:31:18.963 "get_zone_info": false, 00:31:18.963 
"zone_management": false, 00:31:18.963 "zone_append": false, 00:31:18.963 "compare": false, 00:31:18.963 "compare_and_write": false, 00:31:18.963 "abort": false, 00:31:18.963 "seek_hole": true, 00:31:18.963 "seek_data": true, 00:31:18.963 "copy": false, 00:31:18.963 "nvme_iov_md": false 00:31:18.963 }, 00:31:18.963 "driver_specific": { 00:31:18.963 "lvol": { 00:31:18.963 "lvol_store_uuid": "9a887006-011c-46cc-b372-84ab6575a018", 00:31:18.963 "base_bdev": "nvme0n1", 00:31:18.963 "thin_provision": true, 00:31:18.963 "num_allocated_clusters": 0, 00:31:18.963 "snapshot": false, 00:31:18.963 "clone": false, 00:31:18.963 "esnap_clone": false 00:31:18.963 } 00:31:18.963 } 00:31:18.963 } 00:31:18.963 ]' 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:18.963 12:41:42 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:19.223 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 00:31:19.482 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:19.482 { 00:31:19.482 "name": "9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6", 00:31:19.482 "aliases": [ 00:31:19.482 "lvs/nvme0n1p0" 00:31:19.482 ], 00:31:19.482 "product_name": "Logical Volume", 00:31:19.482 "block_size": 4096, 00:31:19.482 "num_blocks": 26476544, 00:31:19.482 "uuid": "9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6", 00:31:19.482 "assigned_rate_limits": { 00:31:19.482 "rw_ios_per_sec": 0, 00:31:19.482 "rw_mbytes_per_sec": 0, 00:31:19.482 "r_mbytes_per_sec": 0, 00:31:19.482 "w_mbytes_per_sec": 0 00:31:19.482 }, 00:31:19.482 "claimed": false, 00:31:19.482 "zoned": false, 00:31:19.482 "supported_io_types": { 00:31:19.482 "read": true, 00:31:19.482 "write": true, 00:31:19.482 "unmap": true, 00:31:19.482 "flush": false, 00:31:19.482 "reset": true, 00:31:19.482 "nvme_admin": false, 00:31:19.482 "nvme_io": false, 00:31:19.482 "nvme_io_md": false, 00:31:19.482 "write_zeroes": true, 00:31:19.482 "zcopy": false, 00:31:19.482 "get_zone_info": false, 00:31:19.482 "zone_management": false, 00:31:19.482 "zone_append": false, 00:31:19.482 "compare": false, 00:31:19.482 "compare_and_write": false, 00:31:19.482 "abort": false, 
00:31:19.482 "seek_hole": true, 00:31:19.482 "seek_data": true, 00:31:19.482 "copy": false, 00:31:19.482 "nvme_iov_md": false 00:31:19.482 }, 00:31:19.482 "driver_specific": { 00:31:19.482 "lvol": { 00:31:19.482 "lvol_store_uuid": "9a887006-011c-46cc-b372-84ab6575a018", 00:31:19.482 "base_bdev": "nvme0n1", 00:31:19.482 "thin_provision": true, 00:31:19.482 "num_allocated_clusters": 0, 00:31:19.482 "snapshot": false, 00:31:19.482 "clone": false, 00:31:19.482 "esnap_clone": false 00:31:19.482 } 00:31:19.482 } 00:31:19.482 } 00:31:19.482 ]' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 --l2p_dram_limit 10' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:19.483 12:41:42 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9f82c56a-c2ef-4a77-aca7-8efd9dbf36f6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:19.743 [2024-10-07 12:41:42.873794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-07 12:41:42.873996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:19.743 [2024-10-07 12:41:42.874120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.743 [2024-10-07 12:41:42.874207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-07 12:41:42.874316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-07 12:41:42.874396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.743 [2024-10-07 12:41:42.874480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:19.743 [2024-10-07 12:41:42.874517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-07 12:41:42.874617] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:19.743 [2024-10-07 12:41:42.875629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:19.743 [2024-10-07 12:41:42.875783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-07 12:41:42.875855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.743 [2024-10-07 12:41:42.875895] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:31:19.743 [2024-10-07 12:41:42.875987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-07 12:41:42.876165] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dbbdbf03-3353-47c3-8ed4-2dbb0b74638e 00:31:19.743 [2024-10-07 12:41:42.877708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-07 12:41:42.877852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:19.743 [2024-10-07 12:41:42.877936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:19.743 [2024-10-07 12:41:42.877979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.885708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.885881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.744 [2024-10-07 12:41:42.885975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.587 ms 00:31:19.744 [2024-10-07 12:41:42.886018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.886142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.886182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.744 [2024-10-07 12:41:42.886268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:19.744 [2024-10-07 12:41:42.886316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.886397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.886437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:19.744 [2024-10-07 12:41:42.886469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:19.744 [2024-10-07 12:41:42.886557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.886612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:19.744 [2024-10-07 12:41:42.891699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.891847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.744 [2024-10-07 12:41:42.892013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.100 ms 00:31:19.744 [2024-10-07 12:41:42.892052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.892157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.892196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:19.744 [2024-10-07 12:41:42.892232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:19.744 [2024-10-07 12:41:42.892299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.892376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:19.744 [2024-10-07 12:41:42.892524] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:19.744 [2024-10-07 12:41:42.892629] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:19.744 [2024-10-07 12:41:42.892729] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:19.744 [2024-10-07 12:41:42.892791] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:19.744 [2024-10-07 12:41:42.892841] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:19.744 [2024-10-07 12:41:42.892876] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:19.744 [2024-10-07 12:41:42.892888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:19.744 [2024-10-07 12:41:42.892913] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:19.744 [2024-10-07 12:41:42.892924] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:19.744 [2024-10-07 12:41:42.892938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.892959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:19.744 [2024-10-07 12:41:42.892973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:31:19.744 [2024-10-07 12:41:42.892984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.893065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-07 12:41:42.893082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:19.744 [2024-10-07 12:41:42.893096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:19.744 [2024-10-07 12:41:42.893106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-07 12:41:42.893196] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:19.744 [2024-10-07 12:41:42.893210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:19.744 [2024-10-07 12:41:42.893223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:19.744 [2024-10-07 12:41:42.893258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:19.744 [2024-10-07 12:41:42.893293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.744 [2024-10-07 12:41:42.893314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:19.744 [2024-10-07 12:41:42.893324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:19.744 [2024-10-07 12:41:42.893336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.744 [2024-10-07 12:41:42.893345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:19.744 [2024-10-07 12:41:42.893358] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:19.744 [2024-10-07 12:41:42.893368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:19.744 [2024-10-07 12:41:42.893393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:19.744 [2024-10-07 12:41:42.893427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:19.744 [2024-10-07 12:41:42.893460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:19.744 [2024-10-07 12:41:42.893493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:19.744 [2024-10-07 12:41:42.893523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:19.744 [2024-10-07 12:41:42.893559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.744 [2024-10-07 12:41:42.893580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:19.744 [2024-10-07 12:41:42.893589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:19.744 [2024-10-07 12:41:42.893601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.744 [2024-10-07 12:41:42.893610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:19.744 [2024-10-07 12:41:42.893622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:19.744 [2024-10-07 12:41:42.893631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:19.744 [2024-10-07 12:41:42.893652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:19.744 [2024-10-07 12:41:42.893663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893672] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:19.744 [2024-10-07 12:41:42.893685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:19.744 [2024-10-07 12:41:42.893698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:19.744 [2024-10-07 12:41:42.893711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.744 [2024-10-07 12:41:42.893720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:19.744 [2024-10-07 12:41:42.893736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:19.744 [2024-10-07 12:41:42.893746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:19.744 [2024-10-07 12:41:42.893758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:19.744 [2024-10-07 12:41:42.893767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:19.744 [2024-10-07 12:41:42.893780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:19.744 [2024-10-07 12:41:42.893795] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:19.744 [2024-10-07 12:41:42.893811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.744 [2024-10-07 12:41:42.893823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:19.744 [2024-10-07 12:41:42.893836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:19.744 [2024-10-07 12:41:42.893847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:19.744 [2024-10-07 12:41:42.893860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:19.744 [2024-10-07 12:41:42.893871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:19.744 [2024-10-07 12:41:42.893884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:19.744 [2024-10-07 12:41:42.893895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:19.744 [2024-10-07 12:41:42.893926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:19.744 [2024-10-07 12:41:42.893937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:19.744 [2024-10-07 12:41:42.893954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:19.744 [2024-10-07 12:41:42.893964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:19.745 [2024-10-07 12:41:42.893977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:19.745 [2024-10-07 12:41:42.893988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:19.745 [2024-10-07 12:41:42.894001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
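The raw nvc superblock region table above is consistent with the human-readable layout dump that precedes it: region type 0x2 (blk_offs:0x20 blk_sz:0x5000) is the L2P, with 0x20 blocks of 4 KiB giving the 0.12 MiB offset and 0x5000 blocks giving the 80.00 MiB size, which also equals the 20971520 L2P entries of 4 B each reported earlier. The arithmetic, as a quick shell check:

echo $(( 0x5000 * 4096 / 1024 / 1024 ))    # 20480 blocks * 4 KiB     -> 80 (MiB)
echo $(( 20971520 * 4 / 1024 / 1024 ))     # L2P entries * 4 B/entry  -> 80 (MiB)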
00:31:19.745 [2024-10-07 12:41:42.894011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:19.745 [2024-10-07 12:41:42.894026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.745 [2024-10-07 12:41:42.894038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:19.745 [2024-10-07 12:41:42.894053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:19.745 [2024-10-07 12:41:42.894063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:19.745 [2024-10-07 12:41:42.894077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:19.745 [2024-10-07 12:41:42.894088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.745 [2024-10-07 12:41:42.894101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:19.745 [2024-10-07 12:41:42.894113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:31:19.745 [2024-10-07 12:41:42.894126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.745 [2024-10-07 12:41:42.894170] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:19.745 [2024-10-07 12:41:42.894188] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:23.942 [2024-10-07 12:41:46.793947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.794210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:23.942 [2024-10-07 12:41:46.794236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3906.106 ms 00:31:23.942 [2024-10-07 12:41:46.794250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.830930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.830983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:23.942 [2024-10-07 12:41:46.830998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.403 ms 00:31:23.942 [2024-10-07 12:41:46.831011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.831133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.831150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:23.942 [2024-10-07 12:41:46.831161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:23.942 [2024-10-07 12:41:46.831179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.885638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.885690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:23.942 [2024-10-07 12:41:46.885712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.493 ms 00:31:23.942 [2024-10-07 12:41:46.885728] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.885767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.885784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:23.942 [2024-10-07 12:41:46.885798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:23.942 [2024-10-07 12:41:46.885826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.886350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.886372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:23.942 [2024-10-07 12:41:46.886386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:31:23.942 [2024-10-07 12:41:46.886405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.886525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.886550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:23.942 [2024-10-07 12:41:46.886563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:31:23.942 [2024-10-07 12:41:46.886582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.907149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.907189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:23.942 [2024-10-07 12:41:46.907202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.577 ms 00:31:23.942 [2024-10-07 12:41:46.907214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:46.919839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:23.942 [2024-10-07 12:41:46.923108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:46.923137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:23.942 [2024-10-07 12:41:46.923155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.837 ms 00:31:23.942 [2024-10-07 12:41:46.923165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.020101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.020149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:23.942 [2024-10-07 12:41:47.020170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.057 ms 00:31:23.942 [2024-10-07 12:41:47.020181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.020348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.020362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:23.942 [2024-10-07 12:41:47.020379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:31:23.942 [2024-10-07 12:41:47.020388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.054617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.054659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:23.942 [2024-10-07 12:41:47.054674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.230 ms 00:31:23.942 [2024-10-07 12:41:47.054685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.089000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.089035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:23.942 [2024-10-07 12:41:47.089053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.326 ms 00:31:23.942 [2024-10-07 12:41:47.089063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.089718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.089742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:23.942 [2024-10-07 12:41:47.089756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:31:23.942 [2024-10-07 12:41:47.089767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.188847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.188887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:23.942 [2024-10-07 12:41:47.188928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.184 ms 00:31:23.942 [2024-10-07 12:41:47.188943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.942 [2024-10-07 12:41:47.224735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.942 [2024-10-07 12:41:47.224772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:23.942 [2024-10-07 12:41:47.224789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.767 ms 00:31:23.942 [2024-10-07 12:41:47.224799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.202 [2024-10-07 12:41:47.258561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.202 [2024-10-07 12:41:47.258596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:24.202 [2024-10-07 12:41:47.258612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.772 ms 00:31:24.202 [2024-10-07 12:41:47.258621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.202 [2024-10-07 12:41:47.292341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.202 [2024-10-07 12:41:47.292513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:24.202 [2024-10-07 12:41:47.292540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.729 ms 00:31:24.202 [2024-10-07 12:41:47.292550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.202 [2024-10-07 12:41:47.292659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.202 [2024-10-07 12:41:47.292673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:24.202 [2024-10-07 12:41:47.292694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:24.202 [2024-10-07 12:41:47.292704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.202 [2024-10-07 12:41:47.292806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.202 [2024-10-07 
12:41:47.292819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:24.202 [2024-10-07 12:41:47.292834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:24.202 [2024-10-07 12:41:47.292844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.202 [2024-10-07 12:41:47.293850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4426.793 ms, result 0 00:31:24.202 { 00:31:24.202 "name": "ftl0", 00:31:24.202 "uuid": "dbbdbf03-3353-47c3-8ed4-2dbb0b74638e" 00:31:24.202 } 00:31:24.202 12:41:47 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:24.202 12:41:47 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:24.461 12:41:47 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:24.461 12:41:47 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:24.461 [2024-10-07 12:41:47.708507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.462 [2024-10-07 12:41:47.708723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:24.462 [2024-10-07 12:41:47.708749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:24.462 [2024-10-07 12:41:47.708765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.462 [2024-10-07 12:41:47.708803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:24.462 [2024-10-07 12:41:47.712955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.462 [2024-10-07 12:41:47.712987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:24.462 [2024-10-07 12:41:47.713018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms 00:31:24.462 [2024-10-07 12:41:47.713029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.462 [2024-10-07 12:41:47.713274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.462 [2024-10-07 12:41:47.713289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:24.462 [2024-10-07 12:41:47.713302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:31:24.462 [2024-10-07 12:41:47.713312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.462 [2024-10-07 12:41:47.715753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.462 [2024-10-07 12:41:47.715917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:24.462 [2024-10-07 12:41:47.715942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.425 ms 00:31:24.462 [2024-10-07 12:41:47.715956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.462 [2024-10-07 12:41:47.720904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.462 [2024-10-07 12:41:47.720942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:24.462 [2024-10-07 12:41:47.720956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.925 ms 00:31:24.462 [2024-10-07 12:41:47.720966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.755942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.755981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:24.723 [2024-10-07 12:41:47.755998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.977 ms 00:31:24.723 [2024-10-07 12:41:47.756008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.778095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.778262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:24.723 [2024-10-07 12:41:47.778290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.074 ms 00:31:24.723 [2024-10-07 12:41:47.778301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.778488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.778507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:24.723 [2024-10-07 12:41:47.778521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:31:24.723 [2024-10-07 12:41:47.778531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.813647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.813826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:24.723 [2024-10-07 12:41:47.813853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.149 ms 00:31:24.723 [2024-10-07 12:41:47.813864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.847981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.848159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:24.723 [2024-10-07 12:41:47.848185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.051 ms 00:31:24.723 [2024-10-07 12:41:47.848196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.882496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.882533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:24.723 [2024-10-07 12:41:47.882548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.245 ms 00:31:24.723 [2024-10-07 12:41:47.882558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.916703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.723 [2024-10-07 12:41:47.916737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:24.723 [2024-10-07 12:41:47.916752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.107 ms 00:31:24.723 [2024-10-07 12:41:47.916762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.723 [2024-10-07 12:41:47.916803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:24.723 [2024-10-07 12:41:47.916819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916845] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.916983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917183] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 [2024-10-07 12:41:47.917490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:24.723 
[2024-10-07 12:41:47.917500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:24.724 [2024-10-07 12:41:47.917820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.917998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:24.724 [2024-10-07 12:41:47.918114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:24.724 [2024-10-07 12:41:47.918132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbbdbf03-3353-47c3-8ed4-2dbb0b74638e 
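The per-band validity dump above (ftl_dev_dump_bands) prints one console line per band; on a freshly scrubbed device every band reports 0 / 261120 valid blocks, wr_cnt 0, state free. A minimal sketch for condensing such a dump, assuming the console output has been captured one log entry per line into a hypothetical ftl.log:

```bash
#!/usr/bin/env bash
# Summarize the ftl_dev_dump_bands output by band state.
# ftl.log is a hypothetical capture of the console output above,
# one log entry per line as emitted on the console; $NF is the
# trailing state field of "Band N: <valid> / <total> wr_cnt: <n> state: <s>".
awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
         states[$NF]++; total++
     }
     END { for (s in states) printf "%s: %d of %d bands\n", s, states[s], total }' ftl.log
```

For the run above this would report "free: 100 of 100 bands", consistent with the clean-shutdown state being persisted.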
00:31:24.724 [2024-10-07 12:41:47.918142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:24.724 [2024-10-07 12:41:47.918157] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:24.724 [2024-10-07 12:41:47.918167] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:24.724 [2024-10-07 12:41:47.918179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:24.724 [2024-10-07 12:41:47.918190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:24.724 [2024-10-07 12:41:47.918204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:24.724 [2024-10-07 12:41:47.918216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:24.724 [2024-10-07 12:41:47.918228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:24.724 [2024-10-07 12:41:47.918237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:24.724 [2024-10-07 12:41:47.918249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.724 [2024-10-07 12:41:47.918259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:24.724 [2024-10-07 12:41:47.918272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:31:24.724 [2024-10-07 12:41:47.918282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.724 [2024-10-07 12:41:47.937460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.724 [2024-10-07 12:41:47.937603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:24.724 [2024-10-07 12:41:47.937627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.151 ms 00:31:24.724 [2024-10-07 12:41:47.937638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.724 [2024-10-07 12:41:47.938202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.724 [2024-10-07 12:41:47.938216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:24.724 [2024-10-07 12:41:47.938230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:31:24.724 [2024-10-07 12:41:47.938240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.724 [2024-10-07 12:41:47.992446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.724 [2024-10-07 12:41:47.992481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:24.724 [2024-10-07 12:41:47.992496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.724 [2024-10-07 12:41:47.992509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.724 [2024-10-07 12:41:47.992562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.724 [2024-10-07 12:41:47.992572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:24.724 [2024-10-07 12:41:47.992585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.724 [2024-10-07 12:41:47.992595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.724 [2024-10-07 12:41:47.992683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.724 [2024-10-07 12:41:47.992697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:24.724 [2024-10-07 12:41:47.992711] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.724 [2024-10-07 12:41:47.992721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.724 [2024-10-07 12:41:47.992748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.724 [2024-10-07 12:41:47.992758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:24.724 [2024-10-07 12:41:47.992770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.724 [2024-10-07 12:41:47.992780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.983 [2024-10-07 12:41:48.108695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.983 [2024-10-07 12:41:48.108743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:24.983 [2024-10-07 12:41:48.108760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.983 [2024-10-07 12:41:48.108770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.983 [2024-10-07 12:41:48.202260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.983 [2024-10-07 12:41:48.202307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:24.983 [2024-10-07 12:41:48.202323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.983 [2024-10-07 12:41:48.202334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.983 [2024-10-07 12:41:48.202434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.983 [2024-10-07 12:41:48.202446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:24.983 [2024-10-07 12:41:48.202460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.983 [2024-10-07 12:41:48.202471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.983 [2024-10-07 12:41:48.202525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.983 [2024-10-07 12:41:48.202540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:24.983 [2024-10-07 12:41:48.202553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.983 [2024-10-07 12:41:48.202563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.983 [2024-10-07 12:41:48.202667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.983 [2024-10-07 12:41:48.202681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:24.983 [2024-10-07 12:41:48.202694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.983 [2024-10-07 12:41:48.202704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.983 [2024-10-07 12:41:48.202742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.983 [2024-10-07 12:41:48.202754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:24.983 [2024-10-07 12:41:48.202770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.983 [2024-10-07 12:41:48.202779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.984 [2024-10-07 12:41:48.202819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.984 [2024-10-07 12:41:48.202830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:31:24.984 [2024-10-07 12:41:48.202842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.984 [2024-10-07 12:41:48.202853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.984 [2024-10-07 12:41:48.202923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:24.984 [2024-10-07 12:41:48.202939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:24.984 [2024-10-07 12:41:48.202961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:24.984 [2024-10-07 12:41:48.202971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.984 [2024-10-07 12:41:48.203127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.373 ms, result 0 00:31:24.984 true 00:31:24.984 12:41:48 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 82405 00:31:24.984 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 82405 ']' 00:31:24.984 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 82405 00:31:24.984 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:31:24.984 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:24.984 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82405 00:31:25.242 killing process with pid 82405 00:31:25.242 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:25.242 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:25.242 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82405' 00:31:25.242 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 82405 00:31:25.242 12:41:48 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 82405 00:31:30.519 12:41:53 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:33.810 262144+0 records in 00:31:33.810 262144+0 records out 00:31:33.810 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.95293 s, 272 MB/s 00:31:33.810 12:41:57 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:35.717 12:41:58 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:35.717 [2024-10-07 12:41:58.838179] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
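The restore test seeds its data in the three steps traced above: restore.sh@69 writes 1 GiB of /dev/urandom to the test file (the reported 272 MB/s matches 1073741824 B / 3.95293 s), restore.sh@70 records its md5sum, and restore.sh@73 copies the file onto ftl0 with spdk_dd using the bdev config saved earlier. A standalone sketch of the seeding step, with a hypothetical local path standing in for the repo path in the log:

```bash
#!/usr/bin/env bash
# Reproduce the data-seeding step from restore.sh@69-70 above.
# $testfile is a hypothetical local path standing in for
# /home/vagrant/spdk_repo/spdk/test/ftl/testfile.
testfile=/tmp/ftl_testfile
dd if=/dev/urandom of="$testfile" bs=4K count=256K   # 262144 * 4096 B = 1 GiB
md5sum "$testfile"   # recorded so the contents can be verified after restore
```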
00:31:35.717 [2024-10-07 12:41:58.838306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82641 ] 00:31:35.976 [2024-10-07 12:41:59.013296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.976 [2024-10-07 12:41:59.215184] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.546 [2024-10-07 12:41:59.543914] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:36.546 [2024-10-07 12:41:59.543978] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:36.546 [2024-10-07 12:41:59.703367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.546 [2024-10-07 12:41:59.703415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:36.546 [2024-10-07 12:41:59.703430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:36.546 [2024-10-07 12:41:59.703443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.546 [2024-10-07 12:41:59.703488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.546 [2024-10-07 12:41:59.703500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:36.546 [2024-10-07 12:41:59.703510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:36.546 [2024-10-07 12:41:59.703519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.546 [2024-10-07 12:41:59.703539] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:36.546 [2024-10-07 12:41:59.704489] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:36.546 [2024-10-07 12:41:59.704517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.546 [2024-10-07 12:41:59.704528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:36.546 [2024-10-07 12:41:59.704539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:31:36.547 [2024-10-07 12:41:59.704549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.706038] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:36.547 [2024-10-07 12:41:59.723565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.723605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:36.547 [2024-10-07 12:41:59.723619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.556 ms 00:31:36.547 [2024-10-07 12:41:59.723645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.723705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.723718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:36.547 [2024-10-07 12:41:59.723729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:36.547 [2024-10-07 12:41:59.723739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.730632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:36.547 [2024-10-07 12:41:59.730660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:36.547 [2024-10-07 12:41:59.730672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms 00:31:36.547 [2024-10-07 12:41:59.730683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.730757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.730769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:36.547 [2024-10-07 12:41:59.730779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:36.547 [2024-10-07 12:41:59.730788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.730827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.730838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:36.547 [2024-10-07 12:41:59.730848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:36.547 [2024-10-07 12:41:59.730857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.730878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:36.547 [2024-10-07 12:41:59.735595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.735629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:36.547 [2024-10-07 12:41:59.735641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.729 ms 00:31:36.547 [2024-10-07 12:41:59.735650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.735678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.735689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:36.547 [2024-10-07 12:41:59.735699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:36.547 [2024-10-07 12:41:59.735708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.735761] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:36.547 [2024-10-07 12:41:59.735783] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:36.547 [2024-10-07 12:41:59.735816] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:36.547 [2024-10-07 12:41:59.735832] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:36.547 [2024-10-07 12:41:59.735932] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:36.547 [2024-10-07 12:41:59.735962] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:36.547 [2024-10-07 12:41:59.735975] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:36.547 [2024-10-07 12:41:59.735991] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736003] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736014] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:36.547 [2024-10-07 12:41:59.736024] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:36.547 [2024-10-07 12:41:59.736033] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:36.547 [2024-10-07 12:41:59.736043] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:36.547 [2024-10-07 12:41:59.736053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.736063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:36.547 [2024-10-07 12:41:59.736073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:31:36.547 [2024-10-07 12:41:59.736082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.736151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.547 [2024-10-07 12:41:59.736165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:36.547 [2024-10-07 12:41:59.736175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:36.547 [2024-10-07 12:41:59.736184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.547 [2024-10-07 12:41:59.736274] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:36.547 [2024-10-07 12:41:59.736288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:36.547 [2024-10-07 12:41:59.736299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:36.547 [2024-10-07 12:41:59.736329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:36.547 [2024-10-07 12:41:59.736356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:36.547 [2024-10-07 12:41:59.736375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:36.547 [2024-10-07 12:41:59.736384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:36.547 [2024-10-07 12:41:59.736393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:36.547 [2024-10-07 12:41:59.736428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:36.547 [2024-10-07 12:41:59.736438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:36.547 [2024-10-07 12:41:59.736447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:36.547 [2024-10-07 12:41:59.736466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736475] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:36.547 [2024-10-07 12:41:59.736494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:36.547 [2024-10-07 12:41:59.736520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:36.547 [2024-10-07 12:41:59.736548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:36.547 [2024-10-07 12:41:59.736577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:36.547 [2024-10-07 12:41:59.736603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:36.547 [2024-10-07 12:41:59.736621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:36.547 [2024-10-07 12:41:59.736630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:36.547 [2024-10-07 12:41:59.736639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:36.547 [2024-10-07 12:41:59.736648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:36.547 [2024-10-07 12:41:59.736657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:36.547 [2024-10-07 12:41:59.736666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:36.547 [2024-10-07 12:41:59.736684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:36.547 [2024-10-07 12:41:59.736693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736702] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:36.547 [2024-10-07 12:41:59.736712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:36.547 [2024-10-07 12:41:59.736725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.547 [2024-10-07 12:41:59.736744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:36.547 [2024-10-07 12:41:59.736754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:36.547 [2024-10-07 12:41:59.736763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:36.547 
[2024-10-07 12:41:59.736773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:36.547 [2024-10-07 12:41:59.736782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:36.547 [2024-10-07 12:41:59.736792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:36.547 [2024-10-07 12:41:59.736802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:36.547 [2024-10-07 12:41:59.736815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.736826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:36.548 [2024-10-07 12:41:59.736836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:36.548 [2024-10-07 12:41:59.736847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:36.548 [2024-10-07 12:41:59.736859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:36.548 [2024-10-07 12:41:59.736869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:36.548 [2024-10-07 12:41:59.736879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:36.548 [2024-10-07 12:41:59.736890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:36.548 [2024-10-07 12:41:59.736900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:36.548 [2024-10-07 12:41:59.736910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:36.548 [2024-10-07 12:41:59.736921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.736942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.736953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.736964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.736975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:36.548 [2024-10-07 12:41:59.736985] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:36.548 [2024-10-07 12:41:59.736997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.737008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:36.548 [2024-10-07 12:41:59.737018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:36.548 [2024-10-07 12:41:59.737029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:36.548 [2024-10-07 12:41:59.737039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:36.548 [2024-10-07 12:41:59.737051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.737061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:36.548 [2024-10-07 12:41:59.737071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:31:36.548 [2024-10-07 12:41:59.737081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.548 [2024-10-07 12:41:59.783346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.783529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:36.548 [2024-10-07 12:41:59.783568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.293 ms 00:31:36.548 [2024-10-07 12:41:59.783579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.548 [2024-10-07 12:41:59.783662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.783674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:36.548 [2024-10-07 12:41:59.783684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:36.548 [2024-10-07 12:41:59.783694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.548 [2024-10-07 12:41:59.822886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.822923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:36.548 [2024-10-07 12:41:59.822940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.201 ms 00:31:36.548 [2024-10-07 12:41:59.822958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.548 [2024-10-07 12:41:59.822992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.823003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:36.548 [2024-10-07 12:41:59.823013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:36.548 [2024-10-07 12:41:59.823022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.548 [2024-10-07 12:41:59.823499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.823512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:36.548 [2024-10-07 12:41:59.823522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:31:36.548 [2024-10-07 12:41:59.823538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.548 [2024-10-07 12:41:59.823642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.548 [2024-10-07 12:41:59.823655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:36.548 [2024-10-07 12:41:59.823665] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:31:36.548 [2024-10-07 12:41:59.823674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:41:59.841524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:41:59.841557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:36.808 [2024-10-07 12:41:59.841569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.860 ms 00:31:36.808 [2024-10-07 12:41:59.841579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:41:59.859463] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:36.808 [2024-10-07 12:41:59.859501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:36.808 [2024-10-07 12:41:59.859516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:41:59.859526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:36.808 [2024-10-07 12:41:59.859536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.870 ms 00:31:36.808 [2024-10-07 12:41:59.859545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:41:59.887501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:41:59.887541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:36.808 [2024-10-07 12:41:59.887554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.961 ms 00:31:36.808 [2024-10-07 12:41:59.887565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:41:59.904962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:41:59.904997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:36.808 [2024-10-07 12:41:59.905010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.379 ms 00:31:36.808 [2024-10-07 12:41:59.905019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:41:59.921923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:41:59.921959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:36.808 [2024-10-07 12:41:59.921972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.893 ms 00:31:36.808 [2024-10-07 12:41:59.921981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:41:59.922693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:41:59.922716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:36.808 [2024-10-07 12:41:59.922726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:31:36.808 [2024-10-07 12:41:59.922736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.005741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.005802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:36.808 [2024-10-07 12:42:00.005819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 83.119 ms 00:31:36.808 [2024-10-07 12:42:00.005831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.016045] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:36.808 [2024-10-07 12:42:00.018413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.018443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:36.808 [2024-10-07 12:42:00.018456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.560 ms 00:31:36.808 [2024-10-07 12:42:00.018466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.018543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.018556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:36.808 [2024-10-07 12:42:00.018567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:36.808 [2024-10-07 12:42:00.018576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.018648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.018660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:36.808 [2024-10-07 12:42:00.018671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:36.808 [2024-10-07 12:42:00.018681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.018700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.018714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:36.808 [2024-10-07 12:42:00.018724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:36.808 [2024-10-07 12:42:00.018734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.018767] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:36.808 [2024-10-07 12:42:00.018779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.018789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:36.808 [2024-10-07 12:42:00.018799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:36.808 [2024-10-07 12:42:00.018813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.055651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.055807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:36.808 [2024-10-07 12:42:00.055975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.878 ms 00:31:36.808 [2024-10-07 12:42:00.056018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.808 [2024-10-07 12:42:00.056113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.808 [2024-10-07 12:42:00.056240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:36.808 [2024-10-07 12:42:00.056313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:36.808 [2024-10-07 12:42:00.056342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
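
A quick consistency check on the layout dumps above (a sketch, assuming SPDK FTL's 4 KiB block size, i.e. FTL_BLOCK_SIZE = 4096, which the log itself never prints): the hex blk_sz fields in the superblock metadata dump reproduce the MiB figures printed by dump_region. For instance, region type 0x9 spans 0x1900000 blocks, and 0x1900000 blocks * 4 KiB = 102400 MiB, matching the data_btm region's "blocks: 102400.00 MiB" line.

    /* Minimal sketch: convert a superblock blk_sz (counted in FTL blocks)
     * to MiB and compare against the dump_region output above. The
     * 4096-byte block size is an assumption, not printed in this log. */
    #include <stdio.h>
    #include <stdint.h>

    #define FTL_BLOCK_SIZE 4096ULL               /* assumed bytes per FTL block */

    int main(void)
    {
        uint64_t blk_sz = 0x1900000;             /* region type 0x9 from the SB dump */
        double mib = (double)(blk_sz * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
        printf("region size: %.2f MiB\n", mib);  /* -> 102400.00, matches data_btm */
        return 0;
    }

The same conversion recovers the other regions' MiB figures, e.g. blk_sz 0x5000 (20480 blocks) works out to exactly 80.00 MiB.
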
00:31:36.808 [2024-10-07 12:42:00.057470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.248 ms, result 0 00:31:38.185  [2024-10-07T12:42:02.414Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-07T12:42:03.352Z] Copying: 44/1024 [MB] (21 MBps) [2024-10-07T12:42:04.302Z] Copying: 66/1024 [MB] (22 MBps) [2024-10-07T12:42:05.291Z] Copying: 89/1024 [MB] (23 MBps) [2024-10-07T12:42:06.229Z] Copying: 113/1024 [MB] (23 MBps) [2024-10-07T12:42:07.166Z] Copying: 138/1024 [MB] (24 MBps) [2024-10-07T12:42:08.103Z] Copying: 162/1024 [MB] (24 MBps) [2024-10-07T12:42:09.480Z] Copying: 186/1024 [MB] (23 MBps) [2024-10-07T12:42:10.417Z] Copying: 208/1024 [MB] (22 MBps) [2024-10-07T12:42:11.354Z] Copying: 230/1024 [MB] (21 MBps) [2024-10-07T12:42:12.290Z] Copying: 251/1024 [MB] (21 MBps) [2024-10-07T12:42:13.227Z] Copying: 274/1024 [MB] (22 MBps) [2024-10-07T12:42:14.164Z] Copying: 297/1024 [MB] (23 MBps) [2024-10-07T12:42:15.101Z] Copying: 319/1024 [MB] (22 MBps) [2024-10-07T12:42:16.479Z] Copying: 341/1024 [MB] (22 MBps) [2024-10-07T12:42:17.048Z] Copying: 365/1024 [MB] (23 MBps) [2024-10-07T12:42:18.427Z] Copying: 388/1024 [MB] (22 MBps) [2024-10-07T12:42:19.365Z] Copying: 410/1024 [MB] (22 MBps) [2024-10-07T12:42:20.302Z] Copying: 432/1024 [MB] (21 MBps) [2024-10-07T12:42:21.240Z] Copying: 454/1024 [MB] (21 MBps) [2024-10-07T12:42:22.178Z] Copying: 477/1024 [MB] (23 MBps) [2024-10-07T12:42:23.115Z] Copying: 499/1024 [MB] (22 MBps) [2024-10-07T12:42:24.061Z] Copying: 521/1024 [MB] (21 MBps) [2024-10-07T12:42:25.042Z] Copying: 543/1024 [MB] (21 MBps) [2024-10-07T12:42:26.422Z] Copying: 567/1024 [MB] (23 MBps) [2024-10-07T12:42:27.360Z] Copying: 590/1024 [MB] (23 MBps) [2024-10-07T12:42:28.298Z] Copying: 613/1024 [MB] (22 MBps) [2024-10-07T12:42:29.235Z] Copying: 635/1024 [MB] (22 MBps) [2024-10-07T12:42:30.172Z] Copying: 657/1024 [MB] (22 MBps) [2024-10-07T12:42:31.110Z] Copying: 680/1024 [MB] (22 MBps) [2024-10-07T12:42:32.047Z] Copying: 703/1024 [MB] (23 MBps) [2024-10-07T12:42:33.426Z] Copying: 726/1024 [MB] (22 MBps) [2024-10-07T12:42:34.364Z] Copying: 749/1024 [MB] (23 MBps) [2024-10-07T12:42:35.301Z] Copying: 773/1024 [MB] (23 MBps) [2024-10-07T12:42:36.239Z] Copying: 794/1024 [MB] (21 MBps) [2024-10-07T12:42:37.176Z] Copying: 815/1024 [MB] (20 MBps) [2024-10-07T12:42:38.114Z] Copying: 836/1024 [MB] (20 MBps) [2024-10-07T12:42:39.052Z] Copying: 857/1024 [MB] (21 MBps) [2024-10-07T12:42:40.431Z] Copying: 878/1024 [MB] (20 MBps) [2024-10-07T12:42:41.370Z] Copying: 900/1024 [MB] (22 MBps) [2024-10-07T12:42:42.308Z] Copying: 922/1024 [MB] (21 MBps) [2024-10-07T12:42:43.246Z] Copying: 943/1024 [MB] (21 MBps) [2024-10-07T12:42:44.201Z] Copying: 966/1024 [MB] (23 MBps) [2024-10-07T12:42:45.162Z] Copying: 990/1024 [MB] (23 MBps) [2024-10-07T12:42:45.732Z] Copying: 1012/1024 [MB] (22 MBps) [2024-10-07T12:42:45.732Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-10-07 12:42:45.471264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.441 [2024-10-07 12:42:45.471317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:22.441 [2024-10-07 12:42:45.471335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:22.441 [2024-10-07 12:42:45.471347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.441 [2024-10-07 12:42:45.471380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
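
As a rough cross-check of the copy phase above, using only numbers the progress lines themselves report: 1024 MB at the stated average of 22 MBps is about 46.5 s, which is consistent with the progress timestamps running from roughly 12:42:02 to 12:42:45. A minimal sketch:

    /* Sketch: expected elapsed time for the copy, from the totals in the
     * progress output above; values are read straight off the log. */
    #include <stdio.h>

    int main(void)
    {
        double total_mb  = 1024.0;               /* "Copying: 1024/1024 [MB]" */
        double avg_rate  = 22.0;                 /* "(average 22 MBps)" */
        printf("expected: ~%.0f s\n", total_mb / avg_rate);  /* ~47 s */
        return 0;
    }
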
00:32:22.441 [2024-10-07 12:42:45.475649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.441 [2024-10-07 12:42:45.475690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:22.441 [2024-10-07 12:42:45.475705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.257 ms 00:32:22.441 [2024-10-07 12:42:45.475716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.441 [2024-10-07 12:42:45.477630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.441 [2024-10-07 12:42:45.477803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:22.441 [2024-10-07 12:42:45.477828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:32:22.441 [2024-10-07 12:42:45.477842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.441 [2024-10-07 12:42:45.477889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.441 [2024-10-07 12:42:45.477915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:22.441 [2024-10-07 12:42:45.477929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:22.441 [2024-10-07 12:42:45.477940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.441 [2024-10-07 12:42:45.477988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.441 [2024-10-07 12:42:45.478001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:22.441 [2024-10-07 12:42:45.478013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:22.441 [2024-10-07 12:42:45.478024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.441 [2024-10-07 12:42:45.478042] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:22.441 [2024-10-07 12:42:45.478078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478230] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 
12:42:45.478556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:32:22.441 [2024-10-07 12:42:45.478866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.478971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:22.441 [2024-10-07 12:42:45.479074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:22.442 [2024-10-07 12:42:45.479408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:22.442 [2024-10-07 12:42:45.479420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbbdbf03-3353-47c3-8ed4-2dbb0b74638e 00:32:22.442 [2024-10-07 12:42:45.479431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:22.442 [2024-10-07 12:42:45.479442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:22.442 [2024-10-07 12:42:45.479453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:22.442 [2024-10-07 12:42:45.479465] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:22.442 [2024-10-07 12:42:45.479475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:22.442 [2024-10-07 12:42:45.479487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:22.442 [2024-10-07 12:42:45.479506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:22.442 [2024-10-07 12:42:45.479516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:22.442 [2024-10-07 12:42:45.479528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:22.442 [2024-10-07 12:42:45.479539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.442 [2024-10-07 12:42:45.479550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:22.442 [2024-10-07 12:42:45.479567] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:32:22.442 [2024-10-07 12:42:45.479582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.498689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.442 [2024-10-07 12:42:45.498730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:22.442 [2024-10-07 12:42:45.498744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.114 ms 00:32:22.442 [2024-10-07 12:42:45.498755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.499325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.442 [2024-10-07 12:42:45.499341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:22.442 [2024-10-07 12:42:45.499361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:32:22.442 [2024-10-07 12:42:45.499373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.544280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.442 [2024-10-07 12:42:45.544437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:22.442 [2024-10-07 12:42:45.544480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.442 [2024-10-07 12:42:45.544492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.544551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.442 [2024-10-07 12:42:45.544564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:22.442 [2024-10-07 12:42:45.544584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.442 [2024-10-07 12:42:45.544596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.544661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.442 [2024-10-07 12:42:45.544676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:22.442 [2024-10-07 12:42:45.544688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.442 [2024-10-07 12:42:45.544706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.544726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.442 [2024-10-07 12:42:45.544738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:22.442 [2024-10-07 12:42:45.544750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.442 [2024-10-07 12:42:45.544766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.442 [2024-10-07 12:42:45.660466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.442 [2024-10-07 12:42:45.660523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:22.442 [2024-10-07 12:42:45.660540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.442 [2024-10-07 12:42:45.660552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.755388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.755446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:32:22.702 [2024-10-07 12:42:45.755462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.755483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.755599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.755615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:22.702 [2024-10-07 12:42:45.755642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.755654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.755699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.755713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:22.702 [2024-10-07 12:42:45.755725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.755737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.755831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.755845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:22.702 [2024-10-07 12:42:45.755856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.755868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.755927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.755958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:22.702 [2024-10-07 12:42:45.755971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.755982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.756052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.756065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:22.702 [2024-10-07 12:42:45.756078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.756089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.756146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:22.702 [2024-10-07 12:42:45.756159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:22.702 [2024-10-07 12:42:45.756170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:22.702 [2024-10-07 12:42:45.756183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.702 [2024-10-07 12:42:45.756331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 285.474 ms, result 0 00:32:24.080 00:32:24.080 00:32:24.080 12:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:24.080 [2024-10-07 12:42:47.078215] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 
initialization... 00:32:24.080 [2024-10-07 12:42:47.078528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83121 ] 00:32:24.080 [2024-10-07 12:42:47.250605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.339 [2024-10-07 12:42:47.446728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.599 [2024-10-07 12:42:47.796565] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:24.599 [2024-10-07 12:42:47.796636] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:24.859 [2024-10-07 12:42:47.957940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.859 [2024-10-07 12:42:47.957993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:24.859 [2024-10-07 12:42:47.958011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.859 [2024-10-07 12:42:47.958026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.859 [2024-10-07 12:42:47.958078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.859 [2024-10-07 12:42:47.958091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.859 [2024-10-07 12:42:47.958104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:24.859 [2024-10-07 12:42:47.958114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.859 [2024-10-07 12:42:47.958138] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:24.859 [2024-10-07 12:42:47.959092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:24.859 [2024-10-07 12:42:47.959127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.859 [2024-10-07 12:42:47.959139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.859 [2024-10-07 12:42:47.959152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:32:24.859 [2024-10-07 12:42:47.959163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.859 [2024-10-07 12:42:47.959510] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:24.859 [2024-10-07 12:42:47.959533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.859 [2024-10-07 12:42:47.959544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:24.859 [2024-10-07 12:42:47.959557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:24.859 [2024-10-07 12:42:47.959569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.859 [2024-10-07 12:42:47.959618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.959631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:24.860 [2024-10-07 12:42:47.959643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:24.860 [2024-10-07 12:42:47.959658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.960151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.960177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:24.860 [2024-10-07 12:42:47.960189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:32:24.860 [2024-10-07 12:42:47.960200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.960272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.960285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.860 [2024-10-07 12:42:47.960301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:24.860 [2024-10-07 12:42:47.960312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.960336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.960348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:24.860 [2024-10-07 12:42:47.960359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:24.860 [2024-10-07 12:42:47.960369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.960393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:24.860 [2024-10-07 12:42:47.965554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.965589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.860 [2024-10-07 12:42:47.965602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:32:24.860 [2024-10-07 12:42:47.965613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.965644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.965655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:24.860 [2024-10-07 12:42:47.965671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:24.860 [2024-10-07 12:42:47.965681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.965738] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:24.860 [2024-10-07 12:42:47.965764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:24.860 [2024-10-07 12:42:47.965798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:24.860 [2024-10-07 12:42:47.965816] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:24.860 [2024-10-07 12:42:47.965910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:24.860 [2024-10-07 12:42:47.965928] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:24.860 [2024-10-07 12:42:47.965942] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:24.860 [2024-10-07 12:42:47.965956] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:24.860 [2024-10-07 12:42:47.965968] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:24.860 [2024-10-07 12:42:47.965979] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:24.860 [2024-10-07 12:42:47.965991] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:24.860 [2024-10-07 12:42:47.966001] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:24.860 [2024-10-07 12:42:47.966011] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:24.860 [2024-10-07 12:42:47.966022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.966033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:24.860 [2024-10-07 12:42:47.966044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:32:24.860 [2024-10-07 12:42:47.966059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.966131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.860 [2024-10-07 12:42:47.966143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:24.860 [2024-10-07 12:42:47.966154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:24.860 [2024-10-07 12:42:47.966164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.860 [2024-10-07 12:42:47.966253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:24.860 [2024-10-07 12:42:47.966268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:24.860 [2024-10-07 12:42:47.966280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:24.860 [2024-10-07 12:42:47.966318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:24.860 [2024-10-07 12:42:47.966350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.860 [2024-10-07 12:42:47.966372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:24.860 [2024-10-07 12:42:47.966383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:24.860 [2024-10-07 12:42:47.966393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:24.860 [2024-10-07 12:42:47.966403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:24.860 [2024-10-07 12:42:47.966413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:24.860 [2024-10-07 12:42:47.966433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:24.860 [2024-10-07 12:42:47.966453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966463] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:24.860 [2024-10-07 12:42:47.966483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:24.860 [2024-10-07 12:42:47.966515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:24.860 [2024-10-07 12:42:47.966544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:24.860 [2024-10-07 12:42:47.966575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:24.860 [2024-10-07 12:42:47.966604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.860 [2024-10-07 12:42:47.966624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:24.860 [2024-10-07 12:42:47.966634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:24.860 [2024-10-07 12:42:47.966643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:24.860 [2024-10-07 12:42:47.966653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:24.860 [2024-10-07 12:42:47.966663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:24.860 [2024-10-07 12:42:47.966672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:24.860 [2024-10-07 12:42:47.966693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:24.860 [2024-10-07 12:42:47.966704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:24.860 [2024-10-07 12:42:47.966725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:24.860 [2024-10-07 12:42:47.966736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:24.860 [2024-10-07 12:42:47.966756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:24.860 [2024-10-07 12:42:47.966767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:24.860 [2024-10-07 12:42:47.966777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:24.860 
[2024-10-07 12:42:47.966787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:24.860 [2024-10-07 12:42:47.966797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:24.860 [2024-10-07 12:42:47.966808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:24.860 [2024-10-07 12:42:47.966819] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:24.860 [2024-10-07 12:42:47.966832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.860 [2024-10-07 12:42:47.966844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:24.860 [2024-10-07 12:42:47.966856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:24.860 [2024-10-07 12:42:47.966867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:24.860 [2024-10-07 12:42:47.966879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:24.860 [2024-10-07 12:42:47.966889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:24.860 [2024-10-07 12:42:47.967171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:24.860 [2024-10-07 12:42:47.967248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:24.860 [2024-10-07 12:42:47.967303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:24.861 [2024-10-07 12:42:47.967358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:24.861 [2024-10-07 12:42:47.967463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:24.861 [2024-10-07 12:42:47.967523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:24.861 [2024-10-07 12:42:47.967579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:24.861 [2024-10-07 12:42:47.967633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:24.861 [2024-10-07 12:42:47.967737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:24.861 [2024-10-07 12:42:47.967797] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:24.861 [2024-10-07 12:42:47.967852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.861 [2024-10-07 12:42:47.967921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:24.861 [2024-10-07 12:42:47.968081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:24.861 [2024-10-07 12:42:47.968136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:24.861 [2024-10-07 12:42:47.968238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:24.861 [2024-10-07 12:42:47.968301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:47.968379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:24.861 [2024-10-07 12:42:47.968428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:32:24.861 [2024-10-07 12:42:47.968462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.024540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.024712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.861 [2024-10-07 12:42:48.024879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.055 ms 00:32:24.861 [2024-10-07 12:42:48.024922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.025007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.025021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:24.861 [2024-10-07 12:42:48.025041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:24.861 [2024-10-07 12:42:48.025052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.068156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.068196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.861 [2024-10-07 12:42:48.068211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.106 ms 00:32:24.861 [2024-10-07 12:42:48.068222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.068258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.068271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.861 [2024-10-07 12:42:48.068282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:24.861 [2024-10-07 12:42:48.068294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.068429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.068444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.861 [2024-10-07 12:42:48.068456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:24.861 [2024-10-07 12:42:48.068467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.068579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.068593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.861 [2024-10-07 12:42:48.068604] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:32:24.861 [2024-10-07 12:42:48.068616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.085380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.085557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.861 [2024-10-07 12:42:48.085580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.768 ms 00:32:24.861 [2024-10-07 12:42:48.085593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.085739] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:24.861 [2024-10-07 12:42:48.085758] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:24.861 [2024-10-07 12:42:48.085772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.085785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:24.861 [2024-10-07 12:42:48.085797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:24.861 [2024-10-07 12:42:48.085810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.096731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.096904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:24.861 [2024-10-07 12:42:48.096955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.918 ms 00:32:24.861 [2024-10-07 12:42:48.096975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.097090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.097104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:24.861 [2024-10-07 12:42:48.097116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:32:24.861 [2024-10-07 12:42:48.097128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.097186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.097200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:24.861 [2024-10-07 12:42:48.097213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:24.861 [2024-10-07 12:42:48.097224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.097959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.097983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:24.861 [2024-10-07 12:42:48.097996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:32:24.861 [2024-10-07 12:42:48.098008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.098032] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:24.861 [2024-10-07 12:42:48.098047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.098059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:24.861 [2024-10-07 12:42:48.098071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:24.861 [2024-10-07 12:42:48.098083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.109493] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:24.861 [2024-10-07 12:42:48.109702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.109721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:24.861 [2024-10-07 12:42:48.109734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.614 ms 00:32:24.861 [2024-10-07 12:42:48.109745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.111566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.111602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:24.861 [2024-10-07 12:42:48.111615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:32:24.861 [2024-10-07 12:42:48.111627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.111725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.111747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:24.861 [2024-10-07 12:42:48.111759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:24.861 [2024-10-07 12:42:48.111771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.111801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.111814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:24.861 [2024-10-07 12:42:48.111837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:24.861 [2024-10-07 12:42:48.111849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.111887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:24.861 [2024-10-07 12:42:48.111900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.111932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:24.861 [2024-10-07 12:42:48.111948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:24.861 [2024-10-07 12:42:48.111960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.147489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.147533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:24.861 [2024-10-07 12:42:48.147549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.563 ms 00:32:24.861 [2024-10-07 12:42:48.147577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.147656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.861 [2024-10-07 12:42:48.147677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:24.861 [2024-10-07 12:42:48.147690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:32:24.861 [2024-10-07 12:42:48.147701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.861 [2024-10-07 12:42:48.148822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 190.735 ms, result 0 00:32:26.240  [2024-10-07T12:42:50.469Z] Copying: 23/1024 [MB] (23 MBps) [2024-10-07T12:42:51.407Z] Copying: 46/1024 [MB] (23 MBps) [2024-10-07T12:42:52.785Z] Copying: 70/1024 [MB] (23 MBps) [2024-10-07T12:42:53.354Z] Copying: 93/1024 [MB] (23 MBps) [2024-10-07T12:42:54.733Z] Copying: 116/1024 [MB] (22 MBps) [2024-10-07T12:42:55.671Z] Copying: 139/1024 [MB] (22 MBps) [2024-10-07T12:42:56.610Z] Copying: 162/1024 [MB] (22 MBps) [2024-10-07T12:42:57.548Z] Copying: 185/1024 [MB] (22 MBps) [2024-10-07T12:42:58.486Z] Copying: 208/1024 [MB] (22 MBps) [2024-10-07T12:42:59.424Z] Copying: 232/1024 [MB] (23 MBps) [2024-10-07T12:43:00.363Z] Copying: 256/1024 [MB] (24 MBps) [2024-10-07T12:43:01.743Z] Copying: 280/1024 [MB] (24 MBps) [2024-10-07T12:43:02.681Z] Copying: 304/1024 [MB] (23 MBps) [2024-10-07T12:43:03.617Z] Copying: 328/1024 [MB] (23 MBps) [2024-10-07T12:43:04.556Z] Copying: 351/1024 [MB] (23 MBps) [2024-10-07T12:43:05.493Z] Copying: 375/1024 [MB] (23 MBps) [2024-10-07T12:43:06.430Z] Copying: 399/1024 [MB] (23 MBps) [2024-10-07T12:43:07.367Z] Copying: 422/1024 [MB] (23 MBps) [2024-10-07T12:43:08.744Z] Copying: 447/1024 [MB] (24 MBps) [2024-10-07T12:43:09.681Z] Copying: 470/1024 [MB] (23 MBps) [2024-10-07T12:43:10.617Z] Copying: 493/1024 [MB] (23 MBps) [2024-10-07T12:43:11.554Z] Copying: 516/1024 [MB] (23 MBps) [2024-10-07T12:43:12.491Z] Copying: 540/1024 [MB] (23 MBps) [2024-10-07T12:43:13.428Z] Copying: 563/1024 [MB] (22 MBps) [2024-10-07T12:43:14.366Z] Copying: 588/1024 [MB] (24 MBps) [2024-10-07T12:43:15.744Z] Copying: 613/1024 [MB] (25 MBps) [2024-10-07T12:43:16.314Z] Copying: 639/1024 [MB] (25 MBps) [2024-10-07T12:43:17.692Z] Copying: 665/1024 [MB] (26 MBps) [2024-10-07T12:43:18.631Z] Copying: 690/1024 [MB] (25 MBps) [2024-10-07T12:43:19.568Z] Copying: 716/1024 [MB] (25 MBps) [2024-10-07T12:43:20.512Z] Copying: 741/1024 [MB] (25 MBps) [2024-10-07T12:43:21.450Z] Copying: 765/1024 [MB] (24 MBps) [2024-10-07T12:43:22.388Z] Copying: 790/1024 [MB] (24 MBps) [2024-10-07T12:43:23.324Z] Copying: 815/1024 [MB] (25 MBps) [2024-10-07T12:43:24.728Z] Copying: 840/1024 [MB] (25 MBps) [2024-10-07T12:43:25.308Z] Copying: 865/1024 [MB] (24 MBps) [2024-10-07T12:43:26.687Z] Copying: 891/1024 [MB] (25 MBps) [2024-10-07T12:43:27.625Z] Copying: 915/1024 [MB] (24 MBps) [2024-10-07T12:43:28.564Z] Copying: 940/1024 [MB] (25 MBps) [2024-10-07T12:43:29.502Z] Copying: 965/1024 [MB] (24 MBps) [2024-10-07T12:43:30.440Z] Copying: 990/1024 [MB] (24 MBps) [2024-10-07T12:43:30.701Z] Copying: 1015/1024 [MB] (24 MBps) [2024-10-07T12:43:30.701Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-10-07 12:43:30.659883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.410 [2024-10-07 12:43:30.659952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:07.410 [2024-10-07 12:43:30.659976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:07.410 [2024-10-07 12:43:30.659990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.410 [2024-10-07 12:43:30.660014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:07.410 [2024-10-07 12:43:30.664371] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.410 [2024-10-07 12:43:30.664407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:07.410 [2024-10-07 12:43:30.664420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms 00:33:07.410 [2024-10-07 12:43:30.664431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.410 [2024-10-07 12:43:30.664629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.410 [2024-10-07 12:43:30.664650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:07.410 [2024-10-07 12:43:30.664661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:33:07.410 [2024-10-07 12:43:30.664671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.410 [2024-10-07 12:43:30.664699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.410 [2024-10-07 12:43:30.664710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:07.410 [2024-10-07 12:43:30.664720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:07.410 [2024-10-07 12:43:30.664730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.410 [2024-10-07 12:43:30.664775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.410 [2024-10-07 12:43:30.664786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:07.410 [2024-10-07 12:43:30.664799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:07.410 [2024-10-07 12:43:30.664808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.410 [2024-10-07 12:43:30.664823] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:07.410 [2024-10-07 12:43:30.664836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.664848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.664859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.664870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.664887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665405] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:07.410 [2024-10-07 12:43:30.665535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665669] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 
12:43:30.665928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.665990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:07.411 [2024-10-07 12:43:30.666077] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:07.411 [2024-10-07 12:43:30.666087] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbbdbf03-3353-47c3-8ed4-2dbb0b74638e 00:33:07.411 [2024-10-07 12:43:30.666097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:07.411 [2024-10-07 12:43:30.666106] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:07.411 [2024-10-07 12:43:30.666115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:07.411 [2024-10-07 12:43:30.666125] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:07.411 [2024-10-07 12:43:30.666134] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:07.411 [2024-10-07 12:43:30.666148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:07.411 [2024-10-07 12:43:30.666157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:07.411 [2024-10-07 12:43:30.666166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:07.411 [2024-10-07 12:43:30.666175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:07.411 [2024-10-07 12:43:30.666183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.411 [2024-10-07 12:43:30.666193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:07.411 [2024-10-07 12:43:30.666204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.364 ms 00:33:07.411 [2024-10-07 12:43:30.666213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.411 [2024-10-07 12:43:30.686023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.411 [2024-10-07 12:43:30.686057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:07.411 [2024-10-07 12:43:30.686070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.825 ms 00:33:07.411 [2024-10-07 12:43:30.686085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.411 [2024-10-07 12:43:30.686538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.411 [2024-10-07 12:43:30.686548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:07.411 [2024-10-07 12:43:30.686558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:33:07.411 [2024-10-07 12:43:30.686567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.728477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.728512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:07.671 [2024-10-07 12:43:30.728525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.728555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.728607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.728617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:07.671 [2024-10-07 12:43:30.728627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.728637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.728703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.728717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:07.671 [2024-10-07 12:43:30.728727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.728736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.728757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.728767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:07.671 [2024-10-07 12:43:30.728776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.728786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.848932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.848977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:07.671 [2024-10-07 12:43:30.848991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.849022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.944430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.944475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:07.671 
[2024-10-07 12:43:30.944489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.944500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.944588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.944600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:07.671 [2024-10-07 12:43:30.944611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.944621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.944662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.944674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:07.671 [2024-10-07 12:43:30.944684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.944694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.944791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.944804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:07.671 [2024-10-07 12:43:30.944814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.944837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.944865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.944881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:07.671 [2024-10-07 12:43:30.944892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.944916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.944952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.944963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:07.671 [2024-10-07 12:43:30.944973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.944983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.945023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.671 [2024-10-07 12:43:30.945038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:07.671 [2024-10-07 12:43:30.945048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.671 [2024-10-07 12:43:30.945057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.671 [2024-10-07 12:43:30.945190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 285.718 ms, result 0 00:33:09.050 00:33:09.050 00:33:09.051 12:43:32 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:10.955 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:10.955 12:43:33 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:10.955 [2024-10-07 12:43:33.883126] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:33:10.955 [2024-10-07 12:43:33.883256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83586 ] 00:33:10.955 [2024-10-07 12:43:34.052356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.214 [2024-10-07 12:43:34.247555] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.473 [2024-10-07 12:43:34.597244] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:11.473 [2024-10-07 12:43:34.597306] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:11.473 [2024-10-07 12:43:34.757075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.473 [2024-10-07 12:43:34.757120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:11.473 [2024-10-07 12:43:34.757135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.473 [2024-10-07 12:43:34.757148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.473 [2024-10-07 12:43:34.757194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.473 [2024-10-07 12:43:34.757206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.473 [2024-10-07 12:43:34.757215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:11.473 [2024-10-07 12:43:34.757225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.473 [2024-10-07 12:43:34.757244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:11.473 [2024-10-07 12:43:34.758140] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:11.473 [2024-10-07 12:43:34.758162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.473 [2024-10-07 12:43:34.758172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.473 [2024-10-07 12:43:34.758183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:33:11.473 [2024-10-07 12:43:34.758193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.474 [2024-10-07 12:43:34.758488] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:11.474 [2024-10-07 12:43:34.758508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.474 [2024-10-07 12:43:34.758519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:11.474 [2024-10-07 12:43:34.758530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:11.474 [2024-10-07 12:43:34.758539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.474 [2024-10-07 12:43:34.758610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.474 [2024-10-07 12:43:34.758622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:11.474 [2024-10-07 12:43:34.758632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:11.474 
[2024-10-07 12:43:34.758645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.474 [2024-10-07 12:43:34.759076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.474 [2024-10-07 12:43:34.759091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.474 [2024-10-07 12:43:34.759101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:33:11.474 [2024-10-07 12:43:34.759111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.474 [2024-10-07 12:43:34.759191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.474 [2024-10-07 12:43:34.759204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.474 [2024-10-07 12:43:34.759216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:33:11.474 [2024-10-07 12:43:34.759226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.474 [2024-10-07 12:43:34.759248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.474 [2024-10-07 12:43:34.759258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:11.474 [2024-10-07 12:43:34.759267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.474 [2024-10-07 12:43:34.759277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.474 [2024-10-07 12:43:34.759297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:11.734 [2024-10-07 12:43:34.764031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.734 [2024-10-07 12:43:34.764063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.734 [2024-10-07 12:43:34.764074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:33:11.734 [2024-10-07 12:43:34.764084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.734 [2024-10-07 12:43:34.764112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.734 [2024-10-07 12:43:34.764122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:11.734 [2024-10-07 12:43:34.764136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:11.734 [2024-10-07 12:43:34.764145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.734 [2024-10-07 12:43:34.764197] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:11.734 [2024-10-07 12:43:34.764221] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:11.734 [2024-10-07 12:43:34.764254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:11.734 [2024-10-07 12:43:34.764270] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:11.734 [2024-10-07 12:43:34.764353] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:11.734 [2024-10-07 12:43:34.764369] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:11.734 [2024-10-07 12:43:34.764382] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:33:11.734 [2024-10-07 12:43:34.764394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:11.734 [2024-10-07 12:43:34.764405] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:11.734 [2024-10-07 12:43:34.764416] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:11.734 [2024-10-07 12:43:34.764426] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:11.735 [2024-10-07 12:43:34.764435] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:11.735 [2024-10-07 12:43:34.764445] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:11.735 [2024-10-07 12:43:34.764454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.764473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:11.735 [2024-10-07 12:43:34.764483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:33:11.735 [2024-10-07 12:43:34.764496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.764563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.764573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:11.735 [2024-10-07 12:43:34.764583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:11.735 [2024-10-07 12:43:34.764593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.764681] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:11.735 [2024-10-07 12:43:34.764695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:11.735 [2024-10-07 12:43:34.764705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:11.735 [2024-10-07 12:43:34.764738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:11.735 [2024-10-07 12:43:34.764766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:11.735 [2024-10-07 12:43:34.764785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:11.735 [2024-10-07 12:43:34.764794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:11.735 [2024-10-07 12:43:34.764803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:11.735 [2024-10-07 12:43:34.764812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:11.735 [2024-10-07 12:43:34.764821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:11.735 [2024-10-07 12:43:34.764838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:33:11.735 [2024-10-07 12:43:34.764856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:11.735 [2024-10-07 12:43:34.764883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:11.735 [2024-10-07 12:43:34.764926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:11.735 [2024-10-07 12:43:34.764952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:11.735 [2024-10-07 12:43:34.764979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:11.735 [2024-10-07 12:43:34.764988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.735 [2024-10-07 12:43:34.764997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:11.735 [2024-10-07 12:43:34.765005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:11.735 [2024-10-07 12:43:34.765014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:11.735 [2024-10-07 12:43:34.765023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:11.735 [2024-10-07 12:43:34.765032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:11.735 [2024-10-07 12:43:34.765041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:11.735 [2024-10-07 12:43:34.765050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:11.735 [2024-10-07 12:43:34.765068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:11.735 [2024-10-07 12:43:34.765076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.765085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:11.735 [2024-10-07 12:43:34.765092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:11.735 [2024-10-07 12:43:34.765103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.765112] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:11.735 [2024-10-07 12:43:34.765121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:11.735 [2024-10-07 12:43:34.765130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:11.735 [2024-10-07 12:43:34.765139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.735 [2024-10-07 12:43:34.765148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:11.735 [2024-10-07 12:43:34.765157] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:11.735 [2024-10-07 12:43:34.765165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:11.735 [2024-10-07 12:43:34.765174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:11.735 [2024-10-07 12:43:34.765182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:11.735 [2024-10-07 12:43:34.765190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:11.735 [2024-10-07 12:43:34.765200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:11.735 [2024-10-07 12:43:34.765211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:11.735 [2024-10-07 12:43:34.765232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:11.735 [2024-10-07 12:43:34.765241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:11.735 [2024-10-07 12:43:34.765251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:11.735 [2024-10-07 12:43:34.765260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:11.735 [2024-10-07 12:43:34.765269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:11.735 [2024-10-07 12:43:34.765278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:11.735 [2024-10-07 12:43:34.765288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:11.735 [2024-10-07 12:43:34.765297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:11.735 [2024-10-07 12:43:34.765306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:11.735 [2024-10-07 12:43:34.765352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:11.735 [2024-10-07 12:43:34.765361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:11.735 [2024-10-07 12:43:34.765382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:11.735 [2024-10-07 12:43:34.765392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:11.735 [2024-10-07 12:43:34.765402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:11.735 [2024-10-07 12:43:34.765413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.765423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:11.735 [2024-10-07 12:43:34.765435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:33:11.735 [2024-10-07 12:43:34.765444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.824728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.824891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.735 [2024-10-07 12:43:34.825026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.342 ms 00:33:11.735 [2024-10-07 12:43:34.825067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.825179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.825271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:11.735 [2024-10-07 12:43:34.825316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:11.735 [2024-10-07 12:43:34.825348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.867715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.867856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.735 [2024-10-07 12:43:34.867963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.313 ms 00:33:11.735 [2024-10-07 12:43:34.868010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.868063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.868096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.735 [2024-10-07 12:43:34.868173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:11.735 [2024-10-07 12:43:34.868208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.735 [2024-10-07 12:43:34.868361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.735 [2024-10-07 12:43:34.868480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.736 [2024-10-07 12:43:34.868512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:11.736 [2024-10-07 12:43:34.868588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.868727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.868876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.736 [2024-10-07 12:43:34.868924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:33:11.736 [2024-10-07 12:43:34.868955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.886388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.886522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.736 [2024-10-07 12:43:34.886611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.415 ms 00:33:11.736 [2024-10-07 12:43:34.886647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.886794] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:11.736 [2024-10-07 12:43:34.886855] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:11.736 [2024-10-07 12:43:34.886986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.887020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:11.736 [2024-10-07 12:43:34.887050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:33:11.736 [2024-10-07 12:43:34.887080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.897962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.898108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:11.736 [2024-10-07 12:43:34.898228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.858 ms 00:33:11.736 [2024-10-07 12:43:34.898271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.898401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.898435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:11.736 [2024-10-07 12:43:34.898540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:33:11.736 [2024-10-07 12:43:34.898575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.898650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.898685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:11.736 [2024-10-07 12:43:34.898778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:11.736 [2024-10-07 12:43:34.898848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.899626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.899745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:11.736 [2024-10-07 12:43:34.899819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:33:11.736 [2024-10-07 12:43:34.899854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.899896] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
00:33:11.736 [2024-10-07 12:43:34.900086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.900116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:11.736 [2024-10-07 12:43:34.900146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:33:11.736 [2024-10-07 12:43:34.900174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.912040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:11.736 [2024-10-07 12:43:34.912332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.912380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:11.736 [2024-10-07 12:43:34.912455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.135 ms 00:33:11.736 [2024-10-07 12:43:34.912548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.914365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.914490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:11.736 [2024-10-07 12:43:34.914575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.765 ms 00:33:11.736 [2024-10-07 12:43:34.914609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.914716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.914909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:11.736 [2024-10-07 12:43:34.914960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:11.736 [2024-10-07 12:43:34.914992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.915043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.915074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:11.736 [2024-10-07 12:43:34.915179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:11.736 [2024-10-07 12:43:34.915254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.915308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:11.736 [2024-10-07 12:43:34.915341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.915370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:11.736 [2024-10-07 12:43:34.915406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:11.736 [2024-10-07 12:43:34.915435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.950371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 12:43:34.950512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:11.736 [2024-10-07 12:43:34.950630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.885 ms 00:33:11.736 [2024-10-07 12:43:34.950666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.950755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.736 [2024-10-07 
12:43:34.950799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:11.736 [2024-10-07 12:43:34.950812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:11.736 [2024-10-07 12:43:34.950822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.736 [2024-10-07 12:43:34.951715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 194.545 ms, result 0 00:33:12.675 [2024-10-07T12:44:18.635Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-10-07 12:44:18.380628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.344 [2024-10-07 12:44:18.380686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:55.344 [2024-10-07 12:44:18.380702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:55.344 [2024-10-07 12:44:18.380716] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.344 [2024-10-07 12:44:18.381881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:55.344 [2024-10-07 12:44:18.387515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.344 [2024-10-07 12:44:18.387550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:55.344 [2024-10-07 12:44:18.387563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.599 ms 00:33:55.344 [2024-10-07 12:44:18.387573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.344 [2024-10-07 12:44:18.396207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.344 [2024-10-07 12:44:18.396242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:55.344 [2024-10-07 12:44:18.396254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.259 ms 00:33:55.344 [2024-10-07 12:44:18.396264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.344 [2024-10-07 12:44:18.396290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.344 [2024-10-07 12:44:18.396301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:55.344 [2024-10-07 12:44:18.396316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:55.344 [2024-10-07 12:44:18.396326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.344 [2024-10-07 12:44:18.396370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.344 [2024-10-07 12:44:18.396381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:55.344 [2024-10-07 12:44:18.396390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:55.344 [2024-10-07 12:44:18.396399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.344 [2024-10-07 12:44:18.396413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:55.344 [2024-10-07 12:44:18.396425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:33:55.344 [2024-10-07 12:44:18.396436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:33:55.344 [2024-10-07 12:44:18.396522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.396993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:55.344 [2024-10-07 12:44:18.397086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397322] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:55.345 [2024-10-07 12:44:18.397490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:55.345 [2024-10-07 12:44:18.397499] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbbdbf03-3353-47c3-8ed4-2dbb0b74638e 00:33:55.345 [2024-10-07 12:44:18.397514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:33:55.345 [2024-10-07 12:44:18.397523] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127776 00:33:55.345 [2024-10-07 12:44:18.397533] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:33:55.345 [2024-10-07 12:44:18.397543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:33:55.345 [2024-10-07 12:44:18.397553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:55.345 [2024-10-07 12:44:18.397563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:55.345 [2024-10-07 12:44:18.397573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:55.345 [2024-10-07 12:44:18.397582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:55.345 [2024-10-07 12:44:18.397591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:55.345 [2024-10-07 12:44:18.397600] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.345 [2024-10-07 12:44:18.397610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:55.345 [2024-10-07 12:44:18.397619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:33:55.345 [2024-10-07 12:44:18.397629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.416456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.345 [2024-10-07 12:44:18.416485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:55.345 [2024-10-07 12:44:18.416497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.839 ms 00:33:55.345 [2024-10-07 12:44:18.416506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.417080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.345 [2024-10-07 12:44:18.417096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:55.345 [2024-10-07 12:44:18.417106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:33:55.345 [2024-10-07 12:44:18.417121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.461599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.345 [2024-10-07 12:44:18.461631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:55.345 [2024-10-07 12:44:18.461643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.345 [2024-10-07 12:44:18.461654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.461703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.345 [2024-10-07 12:44:18.461714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:55.345 [2024-10-07 12:44:18.461724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.345 [2024-10-07 12:44:18.461738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.461789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.345 [2024-10-07 12:44:18.461802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:55.345 [2024-10-07 12:44:18.461812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.345 [2024-10-07 12:44:18.461822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.461838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.345 [2024-10-07 12:44:18.461849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:55.345 [2024-10-07 12:44:18.461859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.345 [2024-10-07 12:44:18.461868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.345 [2024-10-07 12:44:18.579774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.345 [2024-10-07 12:44:18.579821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:55.345 [2024-10-07 12:44:18.579858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.345 [2024-10-07 12:44:18.579868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:33:55.605 [2024-10-07 12:44:18.674680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.674724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:55.605 [2024-10-07 12:44:18.674737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.674753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.674840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.674852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:55.605 [2024-10-07 12:44:18.674861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.674871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.674922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.674934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:55.605 [2024-10-07 12:44:18.674966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.674976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.675075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.675087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:55.605 [2024-10-07 12:44:18.675098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.675107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.675133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.675145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:55.605 [2024-10-07 12:44:18.675155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.675165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.675204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.675215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:55.605 [2024-10-07 12:44:18.675225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.675234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.675275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.605 [2024-10-07 12:44:18.675286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:55.605 [2024-10-07 12:44:18.675297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.605 [2024-10-07 12:44:18.675306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.605 [2024-10-07 12:44:18.675421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 298.172 ms, result 0 00:33:57.512 00:33:57.512 00:33:57.512 12:44:20 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:57.512 [2024-10-07 12:44:20.440862] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:33:57.512 [2024-10-07 12:44:20.440999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:33:57.512 [2024-10-07 12:44:20.610691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.771 [2024-10-07 12:44:20.805341] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.032 [2024-10-07 12:44:21.146002] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:58.032 [2024-10-07 12:44:21.146057] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:58.032 [2024-10-07 12:44:21.305613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.305656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:58.032 [2024-10-07 12:44:21.305672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:58.032 [2024-10-07 12:44:21.305685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.305733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.305746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:58.032 [2024-10-07 12:44:21.305757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:58.032 [2024-10-07 12:44:21.305766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.305787] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:58.032 [2024-10-07 12:44:21.306703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:58.032 [2024-10-07 12:44:21.306730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.306741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:58.032 [2024-10-07 12:44:21.306752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:33:58.032 [2024-10-07 12:44:21.306761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.307105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:58.032 [2024-10-07 12:44:21.307126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.307137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:58.032 [2024-10-07 12:44:21.307148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:58.032 [2024-10-07 12:44:21.307158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.307203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.307214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:58.032 [2024-10-07 12:44:21.307224] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:58.032 [2024-10-07 12:44:21.307238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.307638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.307651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:58.032 [2024-10-07 12:44:21.307662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:33:58.032 [2024-10-07 12:44:21.307672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.307742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.307755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:58.032 [2024-10-07 12:44:21.307768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:58.032 [2024-10-07 12:44:21.307778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.307800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.307811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:58.032 [2024-10-07 12:44:21.307821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:58.032 [2024-10-07 12:44:21.307841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.307862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:58.032 [2024-10-07 12:44:21.313309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.313336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:58.032 [2024-10-07 12:44:21.313347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.460 ms 00:33:58.032 [2024-10-07 12:44:21.313372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.313399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.313409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:58.032 [2024-10-07 12:44:21.313423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:58.032 [2024-10-07 12:44:21.313432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.313483] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:58.032 [2024-10-07 12:44:21.313506] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:58.032 [2024-10-07 12:44:21.313539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:58.032 [2024-10-07 12:44:21.313556] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:58.032 [2024-10-07 12:44:21.313639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:58.032 [2024-10-07 12:44:21.313655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:58.032 [2024-10-07 12:44:21.313667] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:58.032 [2024-10-07 12:44:21.313679] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:58.032 [2024-10-07 12:44:21.313706] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:58.032 [2024-10-07 12:44:21.313717] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:58.032 [2024-10-07 12:44:21.313726] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:58.032 [2024-10-07 12:44:21.313735] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:58.032 [2024-10-07 12:44:21.313745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:58.032 [2024-10-07 12:44:21.313755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.313764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:58.032 [2024-10-07 12:44:21.313774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:33:58.032 [2024-10-07 12:44:21.313787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.313855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.032 [2024-10-07 12:44:21.313866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:58.032 [2024-10-07 12:44:21.313876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:33:58.032 [2024-10-07 12:44:21.313885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.032 [2024-10-07 12:44:21.313991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:58.032 [2024-10-07 12:44:21.314006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:58.032 [2024-10-07 12:44:21.314017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:58.032 [2024-10-07 12:44:21.314052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:58.032 [2024-10-07 12:44:21.314080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:58.032 [2024-10-07 12:44:21.314098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:58.032 [2024-10-07 12:44:21.314107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:58.032 [2024-10-07 12:44:21.314116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:58.032 [2024-10-07 12:44:21.314125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:58.032 [2024-10-07 12:44:21.314134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:58.032 [2024-10-07 12:44:21.314152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:58.032 
[2024-10-07 12:44:21.314161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:58.032 [2024-10-07 12:44:21.314170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:58.032 [2024-10-07 12:44:21.314197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:58.032 [2024-10-07 12:44:21.314224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:58.032 [2024-10-07 12:44:21.314251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:58.032 [2024-10-07 12:44:21.314277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:58.032 [2024-10-07 12:44:21.314295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:58.032 [2024-10-07 12:44:21.314304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:58.032 [2024-10-07 12:44:21.314313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:58.032 [2024-10-07 12:44:21.314322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:58.033 [2024-10-07 12:44:21.314331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:58.033 [2024-10-07 12:44:21.314341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:58.033 [2024-10-07 12:44:21.314350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:58.033 [2024-10-07 12:44:21.314358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:58.033 [2024-10-07 12:44:21.314368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:58.033 [2024-10-07 12:44:21.314377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:58.033 [2024-10-07 12:44:21.314385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:58.033 [2024-10-07 12:44:21.314394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:58.033 [2024-10-07 12:44:21.314403] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:58.033 [2024-10-07 12:44:21.314412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:58.033 [2024-10-07 12:44:21.314421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:58.033 [2024-10-07 12:44:21.314431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:58.033 [2024-10-07 12:44:21.314440] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:33:58.033 [2024-10-07 12:44:21.314449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:58.033 [2024-10-07 12:44:21.314458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:58.033 [2024-10-07 12:44:21.314467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:58.033 [2024-10-07 12:44:21.314475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:58.033 [2024-10-07 12:44:21.314484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:58.033 [2024-10-07 12:44:21.314494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:58.033 [2024-10-07 12:44:21.314506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:58.033 [2024-10-07 12:44:21.314527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:58.033 [2024-10-07 12:44:21.314538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:58.033 [2024-10-07 12:44:21.314548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:58.033 [2024-10-07 12:44:21.314558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:58.033 [2024-10-07 12:44:21.314568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:58.033 [2024-10-07 12:44:21.314578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:58.033 [2024-10-07 12:44:21.314588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:58.033 [2024-10-07 12:44:21.314598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:58.033 [2024-10-07 12:44:21.314607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:58.033 [2024-10-07 12:44:21.314660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:58.033 [2024-10-07 
12:44:21.314670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:58.033 [2024-10-07 12:44:21.314691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:58.033 [2024-10-07 12:44:21.314701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:58.033 [2024-10-07 12:44:21.314711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:58.033 [2024-10-07 12:44:21.314721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.033 [2024-10-07 12:44:21.314731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:58.033 [2024-10-07 12:44:21.314744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:33:58.033 [2024-10-07 12:44:21.314754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.364261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.364298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:58.293 [2024-10-07 12:44:21.364316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.547 ms 00:33:58.293 [2024-10-07 12:44:21.364327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.364408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.364421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:58.293 [2024-10-07 12:44:21.364435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:58.293 [2024-10-07 12:44:21.364446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.409295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.409325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:58.293 [2024-10-07 12:44:21.409337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.864 ms 00:33:58.293 [2024-10-07 12:44:21.409347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.409379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.409389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:58.293 [2024-10-07 12:44:21.409399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:58.293 [2024-10-07 12:44:21.409410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.409526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.409539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:58.293 [2024-10-07 12:44:21.409549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:58.293 [2024-10-07 12:44:21.409558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.409673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.409686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:58.293 [2024-10-07 12:44:21.409712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:58.293 [2024-10-07 12:44:21.409722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.427388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.427416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:58.293 [2024-10-07 12:44:21.427429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms 00:33:58.293 [2024-10-07 12:44:21.427454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.427576] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:58.293 [2024-10-07 12:44:21.427593] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:58.293 [2024-10-07 12:44:21.427606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.427616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:58.293 [2024-10-07 12:44:21.427626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:58.293 [2024-10-07 12:44:21.427636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.438121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.438146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:58.293 [2024-10-07 12:44:21.438157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.486 ms 00:33:58.293 [2024-10-07 12:44:21.438170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.438271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.438281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:58.293 [2024-10-07 12:44:21.438291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:33:58.293 [2024-10-07 12:44:21.438300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.438345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.438357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:58.293 [2024-10-07 12:44:21.438366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:58.293 [2024-10-07 12:44:21.438375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.439065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.439096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:58.293 [2024-10-07 12:44:21.439108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:33:58.293 [2024-10-07 12:44:21.439117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 
12:44:21.439136] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:58.293 [2024-10-07 12:44:21.439148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.439158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:58.293 [2024-10-07 12:44:21.439169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:58.293 [2024-10-07 12:44:21.439179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.450137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:58.293 [2024-10-07 12:44:21.450307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.450322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:58.293 [2024-10-07 12:44:21.450333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.126 ms 00:33:58.293 [2024-10-07 12:44:21.450343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.452214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.452240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:58.293 [2024-10-07 12:44:21.452251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:33:58.293 [2024-10-07 12:44:21.452261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.452330] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:58.293 [2024-10-07 12:44:21.452704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.452715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:58.293 [2024-10-07 12:44:21.452726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:33:58.293 [2024-10-07 12:44:21.452735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.452760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.452771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:58.293 [2024-10-07 12:44:21.452781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:58.293 [2024-10-07 12:44:21.452790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.452822] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:58.293 [2024-10-07 12:44:21.452833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.452846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:58.293 [2024-10-07 12:44:21.452856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:58.293 [2024-10-07 12:44:21.452866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.293 [2024-10-07 12:44:21.488780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.293 [2024-10-07 12:44:21.488825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:58.294 [2024-10-07 12:44:21.488839] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.953 ms 00:33:58.294 [2024-10-07 12:44:21.488849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.294 [2024-10-07 12:44:21.488941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:58.294 [2024-10-07 12:44:21.488954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:58.294 [2024-10-07 12:44:21.488964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:58.294 [2024-10-07 12:44:21.488973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:58.294 [2024-10-07 12:44:21.490092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 184.326 ms, result 0 00:33:59.670 [2024-10-07T12:45:02.652Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-07 12:45:02.552338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.361 [2024-10-07 12:45:02.552448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:39.361 [2024-10-07 12:45:02.552483] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:39.361 [2024-10-07 12:45:02.552510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.361 [2024-10-07 12:45:02.552560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:39.361 [2024-10-07 12:45:02.560017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.361 [2024-10-07 12:45:02.560089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:39.361 [2024-10-07 12:45:02.560117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.419 ms 00:34:39.361 [2024-10-07 12:45:02.560139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.361 [2024-10-07 12:45:02.560506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.361 [2024-10-07 12:45:02.560542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:39.361 [2024-10-07 12:45:02.560567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:34:39.361 [2024-10-07 12:45:02.560589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.361 [2024-10-07 12:45:02.560685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.361 [2024-10-07 12:45:02.560734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:39.361 [2024-10-07 12:45:02.560757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:39.361 [2024-10-07 12:45:02.560779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.361 [2024-10-07 12:45:02.560861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.361 [2024-10-07 12:45:02.560891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:39.361 [2024-10-07 12:45:02.560944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:34:39.361 [2024-10-07 12:45:02.560966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.361 [2024-10-07 12:45:02.560998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:39.361 [2024-10-07 12:45:02.561026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:39.361 [2024-10-07 12:45:02.561053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561254] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:39.361 [2024-10-07 12:45:02.561499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 
[2024-10-07 12:45:02.561869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.561991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:34:39.362 [2024-10-07 12:45:02.562539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.562987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:39.362 [2024-10-07 12:45:02.563497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:39.362 [2024-10-07 12:45:02.563525] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dbbdbf03-3353-47c3-8ed4-2dbb0b74638e 00:34:39.362 [2024-10-07 12:45:02.563549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:39.362 [2024-10-07 12:45:02.563571] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:34:39.362 [2024-10-07 12:45:02.563592] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:34:39.362 [2024-10-07 12:45:02.563617] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:34:39.362 [2024-10-07 12:45:02.563641] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:39.362 [2024-10-07 12:45:02.563663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:39.362 [2024-10-07 12:45:02.563685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:39.362 [2024-10-07 12:45:02.563705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:39.362 [2024-10-07 12:45:02.563726] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:39.362 [2024-10-07 12:45:02.563747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.362 [2024-10-07 12:45:02.563770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:39.362 [2024-10-07 12:45:02.563792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.754 ms 00:34:39.362 [2024-10-07 12:45:02.563814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.362 [2024-10-07 12:45:02.584396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.362 [2024-10-07 12:45:02.584449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:39.363 [2024-10-07 12:45:02.584483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.566 ms 00:34:39.363 [2024-10-07 12:45:02.584495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.363 [2024-10-07 12:45:02.585044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:39.363 [2024-10-07 12:45:02.585068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:39.363 [2024-10-07 12:45:02.585090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:34:39.363 [2024-10-07 12:45:02.585112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.363 [2024-10-07 12:45:02.624976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.363 [2024-10-07 12:45:02.625019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:39.363 [2024-10-07 12:45:02.625051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.363 [2024-10-07 12:45:02.625064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.363 [2024-10-07 12:45:02.625133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.363 [2024-10-07 12:45:02.625147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:39.363 [2024-10-07 12:45:02.625166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.363 [2024-10-07 12:45:02.625179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.363 [2024-10-07 12:45:02.625237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.363 [2024-10-07 12:45:02.625253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:39.363 [2024-10-07 12:45:02.625266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.363 [2024-10-07 12:45:02.625278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.363 [2024-10-07 12:45:02.625300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.363 [2024-10-07 12:45:02.625314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:39.363 [2024-10-07 12:45:02.625327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.363 [2024-10-07 12:45:02.625344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.742630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.742689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:39.622 [2024-10-07 12:45:02.742703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.742713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.841589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.841635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:39.622 [2024-10-07 12:45:02.841654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.841664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.841765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.841778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:39.622 [2024-10-07 12:45:02.841789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.841798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.841834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.841844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:39.622 [2024-10-07 12:45:02.841855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.841864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.841957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.841970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:39.622 [2024-10-07 12:45:02.841980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.841990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.842034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.842046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:39.622 [2024-10-07 12:45:02.842056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.842066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.842107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.842119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:39.622 [2024-10-07 12:45:02.842129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.842139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.842180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.622 [2024-10-07 12:45:02.842192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:39.622 [2024-10-07 12:45:02.842202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.622 [2024-10-07 12:45:02.842211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.622 [2024-10-07 12:45:02.842334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 290.450 ms, result 0 00:34:41.045 00:34:41.045 00:34:41.045 12:45:03 ftl.ftl_restore_fast -- 
ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:42.422 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:42.422 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:42.422 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:42.422 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:42.680 Process with pid 82405 is not found 00:34:42.680 Remove shared memory files 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 82405 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 82405 ']' 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 82405 00:34:42.680 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82405) - No such process 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 82405 is not found' 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_band_md /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_l2p_l1 /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_l2p_l2 /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_l2p_l2_ctx /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_nvc_md /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_p2l_pool /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_sb /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_sb_shm /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_trim_bitmap /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_trim_log /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_trim_md /dev/hugepages/ftl_dbbdbf03-3353-47c3-8ed4-2dbb0b74638e_vmap 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:42.680 00:34:42.680 real 3m27.392s 00:34:42.680 user 3m15.096s 00:34:42.680 sys 0m13.660s 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:42.680 ************************************ 00:34:42.680 12:45:05 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:42.680 END TEST ftl_restore_fast 00:34:42.680 ************************************ 00:34:42.680 12:45:05 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:42.680 12:45:05 ftl -- ftl/ftl.sh@14 -- # killprocess 74358 00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@950 -- # '[' -z 74358 ']' 00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@954 -- # kill -0 74358 00:34:42.680 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74358) - No such process 00:34:42.680 Process with pid 74358 is not found 
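Note: the killprocess calls traced above (autotest_common.sh lines 950, 954 and 977) follow a standard shell pattern: validate the pid argument, probe the process with 'kill -0' (signal 0 delivers nothing and only tests for existence), and fall back to a diagnostic echo when the process is already gone, as seen for pids 82405 and 74358. The sketch below is an illustrative reconstruction of that pattern, not the exact SPDK source:

#!/usr/bin/env bash
# Illustrative killprocess sketch; mirrors the probe-then-signal pattern in the trace above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                      # guard against an empty pid (cf. @950)
    if kill -0 "$pid" 2>/dev/null; then            # probe only: signal 0 checks existence (cf. @954)
        echo "killing process with pid $pid"
        kill "$pid"                                # deliver SIGTERM for a graceful shutdown
        wait "$pid" 2>/dev/null || true            # reap it when it is a child of this shell
    else
        echo "Process with pid $pid is not found"  # already exited (cf. @977)
    fi
}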
00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 74358 is not found' 00:34:42.680 12:45:05 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:42.680 12:45:05 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84543 00:34:42.680 12:45:05 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:42.680 12:45:05 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84543 00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@831 -- # '[' -z 84543 ']' 00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.680 12:45:05 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.681 12:45:05 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.681 12:45:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:42.681 [2024-10-07 12:45:05.914282] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:34:42.681 [2024-10-07 12:45:05.914815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84543 ] 00:34:42.940 [2024-10-07 12:45:06.086557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.199 [2024-10-07 12:45:06.272392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.135 12:45:07 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:44.135 12:45:07 ftl -- common/autotest_common.sh@864 -- # return 0 00:34:44.135 12:45:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:44.135 nvme0n1 00:34:44.135 12:45:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:44.135 12:45:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:44.135 12:45:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:44.393 12:45:07 ftl -- ftl/common.sh@28 -- # stores=9a887006-011c-46cc-b372-84ab6575a018 00:34:44.393 12:45:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:44.393 12:45:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a887006-011c-46cc-b372-84ab6575a018 00:34:44.653 12:45:07 ftl -- ftl/ftl.sh@23 -- # killprocess 84543 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@950 -- # '[' -z 84543 ']' 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@954 -- # kill -0 84543 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@955 -- # uname 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84543 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:44.653 killing process with pid 84543 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84543' 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@969 -- # kill 84543 00:34:44.653 12:45:07 ftl -- common/autotest_common.sh@974 -- 
# wait 84543 00:34:47.188 12:45:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:47.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:47.447 Waiting for block devices as requested 00:34:47.447 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.706 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.965 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:53.241 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:53.241 12:45:16 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:53.241 Remove shared memory files 00:34:53.241 12:45:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:53.241 12:45:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:53.241 12:45:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:53.241 12:45:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:53.241 12:45:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:53.241 12:45:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:53.241 00:34:53.241 real 15m12.287s 00:34:53.241 user 17m34.549s 00:34:53.241 sys 1m47.903s 00:34:53.241 12:45:16 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:53.241 12:45:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:53.241 ************************************ 00:34:53.241 END TEST ftl 00:34:53.241 ************************************ 00:34:53.241 12:45:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:53.241 12:45:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:53.241 12:45:16 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:53.241 12:45:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:53.241 12:45:16 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:53.241 12:45:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:53.241 12:45:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:53.241 12:45:16 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:53.241 12:45:16 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:53.241 12:45:16 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:53.241 12:45:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.241 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:34:53.241 12:45:16 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:53.241 12:45:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:53.241 12:45:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:53.241 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:34:55.776 INFO: APP EXITING 00:34:55.776 INFO: killing all VMs 00:34:55.776 INFO: killing vhost app 00:34:55.776 INFO: EXIT DONE 00:34:55.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:56.344 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:56.344 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:56.344 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:56.344 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:56.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:57.172 Cleaning 00:34:57.172 Removing: /var/run/dpdk/spdk0/config 00:34:57.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:57.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:57.431 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:57.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:57.431 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:57.431 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:57.431 Removing: /var/run/dpdk/spdk0 00:34:57.431 Removing: /var/run/dpdk/spdk_pid57842 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58088 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58328 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58432 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58488 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58616 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58644 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58855 00:34:57.431 Removing: /var/run/dpdk/spdk_pid58968 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59075 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59202 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59310 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59355 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59392 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59468 00:34:57.431 Removing: /var/run/dpdk/spdk_pid59600 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60049 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60137 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60212 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60228 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60389 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60405 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60566 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60587 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60657 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60681 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60745 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60763 00:34:57.431 Removing: /var/run/dpdk/spdk_pid60970 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61012 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61101 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61297 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61392 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61440 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61895 00:34:57.431 Removing: /var/run/dpdk/spdk_pid61999 00:34:57.431 Removing: /var/run/dpdk/spdk_pid62119 00:34:57.431 Removing: /var/run/dpdk/spdk_pid62176 00:34:57.431 Removing: /var/run/dpdk/spdk_pid62203 00:34:57.431 Removing: /var/run/dpdk/spdk_pid62292 00:34:57.431 Removing: /var/run/dpdk/spdk_pid62934 00:34:57.431 Removing: /var/run/dpdk/spdk_pid62987 00:34:57.431 Removing: /var/run/dpdk/spdk_pid63473 00:34:57.431 Removing: /var/run/dpdk/spdk_pid63583 00:34:57.431 Removing: /var/run/dpdk/spdk_pid63703 00:34:57.431 Removing: /var/run/dpdk/spdk_pid63762 00:34:57.431 Removing: /var/run/dpdk/spdk_pid63794 00:34:57.691 Removing: /var/run/dpdk/spdk_pid63825 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65717 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65871 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65876 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65888 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65933 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65937 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65949 00:34:57.691 Removing: /var/run/dpdk/spdk_pid65999 00:34:57.691 Removing: /var/run/dpdk/spdk_pid66003 00:34:57.691 Removing: /var/run/dpdk/spdk_pid66015 00:34:57.691 Removing: /var/run/dpdk/spdk_pid66060 00:34:57.691 Removing: /var/run/dpdk/spdk_pid66069 00:34:57.691 Removing: /var/run/dpdk/spdk_pid66081 00:34:57.691 Removing: /var/run/dpdk/spdk_pid67477 00:34:57.691 Removing: /var/run/dpdk/spdk_pid67596 00:34:57.691 Removing: /var/run/dpdk/spdk_pid69037 00:34:57.691 
Removing: /var/run/dpdk/spdk_pid70405 00:34:57.691 Removing: /var/run/dpdk/spdk_pid70514 00:34:57.691 Removing: /var/run/dpdk/spdk_pid70618 00:34:57.691 Removing: /var/run/dpdk/spdk_pid70727 00:34:57.691 Removing: /var/run/dpdk/spdk_pid70859 00:34:57.691 Removing: /var/run/dpdk/spdk_pid70940 00:34:57.691 Removing: /var/run/dpdk/spdk_pid71095 00:34:57.691 Removing: /var/run/dpdk/spdk_pid71476 00:34:57.691 Removing: /var/run/dpdk/spdk_pid71518 00:34:57.691 Removing: /var/run/dpdk/spdk_pid71984 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72175 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72285 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72401 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72465 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72496 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72797 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72868 00:34:57.691 Removing: /var/run/dpdk/spdk_pid72955 00:34:57.691 Removing: /var/run/dpdk/spdk_pid73398 00:34:57.691 Removing: /var/run/dpdk/spdk_pid73546 00:34:57.691 Removing: /var/run/dpdk/spdk_pid74358 00:34:57.691 Removing: /var/run/dpdk/spdk_pid74514 00:34:57.691 Removing: /var/run/dpdk/spdk_pid74734 00:34:57.691 Removing: /var/run/dpdk/spdk_pid74843 00:34:57.691 Removing: /var/run/dpdk/spdk_pid75198 00:34:57.691 Removing: /var/run/dpdk/spdk_pid75458 00:34:57.691 Removing: /var/run/dpdk/spdk_pid75823 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76035 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76185 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76254 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76414 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76451 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76527 00:34:57.691 Removing: /var/run/dpdk/spdk_pid76760 00:34:57.691 Removing: /var/run/dpdk/spdk_pid77007 00:34:57.691 Removing: /var/run/dpdk/spdk_pid77465 00:34:57.691 Removing: /var/run/dpdk/spdk_pid77937 00:34:57.691 Removing: /var/run/dpdk/spdk_pid78396 00:34:57.691 Removing: /var/run/dpdk/spdk_pid78923 00:34:57.950 Removing: /var/run/dpdk/spdk_pid79076 00:34:57.950 Removing: /var/run/dpdk/spdk_pid79163 00:34:57.950 Removing: /var/run/dpdk/spdk_pid79825 00:34:57.950 Removing: /var/run/dpdk/spdk_pid79895 00:34:57.951 Removing: /var/run/dpdk/spdk_pid80378 00:34:57.951 Removing: /var/run/dpdk/spdk_pid80769 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81305 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81438 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81491 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81556 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81616 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81680 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81879 00:34:57.951 Removing: /var/run/dpdk/spdk_pid81964 00:34:57.951 Removing: /var/run/dpdk/spdk_pid82031 00:34:57.951 Removing: /var/run/dpdk/spdk_pid82115 00:34:57.951 Removing: /var/run/dpdk/spdk_pid82157 00:34:57.951 Removing: /var/run/dpdk/spdk_pid82234 00:34:57.951 Removing: /var/run/dpdk/spdk_pid82405 00:34:57.951 Removing: /var/run/dpdk/spdk_pid82641 00:34:57.951 Removing: /var/run/dpdk/spdk_pid83121 00:34:57.951 Removing: /var/run/dpdk/spdk_pid83586 00:34:57.951 Removing: /var/run/dpdk/spdk_pid84067 00:34:57.951 Removing: /var/run/dpdk/spdk_pid84543 00:34:57.951 Clean 00:34:57.951 12:45:21 -- common/autotest_common.sh@1451 -- # return 0 00:34:57.951 12:45:21 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:57.951 12:45:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.951 12:45:21 -- common/autotest_common.sh@10 -- # set +x 00:34:57.951 12:45:21 -- spdk/autotest.sh@387 -- 
# timing_exit autotest 00:34:57.951 12:45:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.951 12:45:21 -- common/autotest_common.sh@10 -- # set +x 00:34:58.210 12:45:21 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:58.210 12:45:21 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:58.210 12:45:21 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:58.210 12:45:21 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:58.210 12:45:21 -- spdk/autotest.sh@394 -- # hostname 00:34:58.210 12:45:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:58.210 geninfo: WARNING: invalid characters removed from testname! 00:35:24.842 12:45:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:24.842 12:45:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:27.374 12:45:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:29.279 12:45:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:31.184 12:45:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:33.091 12:45:56 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:34.998 12:45:58 -- spdk/autotest.sh@404 -- 
# rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:34.998 12:45:58 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:35:34.998 12:45:58 -- common/autotest_common.sh@1681 -- $ lcov --version 00:35:34.998 12:45:58 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:35:35.257 12:45:58 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:35:35.257 12:45:58 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:35:35.257 12:45:58 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:35:35.257 12:45:58 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:35:35.257 12:45:58 -- scripts/common.sh@336 -- $ IFS=.-: 00:35:35.257 12:45:58 -- scripts/common.sh@336 -- $ read -ra ver1 00:35:35.257 12:45:58 -- scripts/common.sh@337 -- $ IFS=.-: 00:35:35.257 12:45:58 -- scripts/common.sh@337 -- $ read -ra ver2 00:35:35.257 12:45:58 -- scripts/common.sh@338 -- $ local 'op=<' 00:35:35.257 12:45:58 -- scripts/common.sh@340 -- $ ver1_l=2 00:35:35.257 12:45:58 -- scripts/common.sh@341 -- $ ver2_l=1 00:35:35.257 12:45:58 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:35:35.257 12:45:58 -- scripts/common.sh@344 -- $ case "$op" in 00:35:35.257 12:45:58 -- scripts/common.sh@345 -- $ : 1 00:35:35.257 12:45:58 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:35:35.257 12:45:58 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:35.257 12:45:58 -- scripts/common.sh@365 -- $ decimal 1 00:35:35.257 12:45:58 -- scripts/common.sh@353 -- $ local d=1 00:35:35.257 12:45:58 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:35:35.257 12:45:58 -- scripts/common.sh@355 -- $ echo 1 00:35:35.257 12:45:58 -- scripts/common.sh@365 -- $ ver1[v]=1 00:35:35.257 12:45:58 -- scripts/common.sh@366 -- $ decimal 2 00:35:35.257 12:45:58 -- scripts/common.sh@353 -- $ local d=2 00:35:35.257 12:45:58 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:35:35.257 12:45:58 -- scripts/common.sh@355 -- $ echo 2 00:35:35.257 12:45:58 -- scripts/common.sh@366 -- $ ver2[v]=2 00:35:35.257 12:45:58 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:35:35.257 12:45:58 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:35:35.257 12:45:58 -- scripts/common.sh@368 -- $ return 0 00:35:35.257 12:45:58 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:35.257 12:45:58 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:35:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.257 --rc genhtml_branch_coverage=1 00:35:35.257 --rc genhtml_function_coverage=1 00:35:35.257 --rc genhtml_legend=1 00:35:35.257 --rc geninfo_all_blocks=1 00:35:35.257 --rc geninfo_unexecuted_blocks=1 00:35:35.257 00:35:35.257 ' 00:35:35.257 12:45:58 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:35:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.257 --rc genhtml_branch_coverage=1 00:35:35.257 --rc genhtml_function_coverage=1 00:35:35.257 --rc genhtml_legend=1 00:35:35.257 --rc geninfo_all_blocks=1 00:35:35.257 --rc geninfo_unexecuted_blocks=1 00:35:35.257 00:35:35.257 ' 00:35:35.257 12:45:58 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:35:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.257 --rc genhtml_branch_coverage=1 00:35:35.257 --rc genhtml_function_coverage=1 00:35:35.257 --rc genhtml_legend=1 00:35:35.257 --rc geninfo_all_blocks=1 00:35:35.257 --rc geninfo_unexecuted_blocks=1 00:35:35.257 00:35:35.257 ' 00:35:35.257 12:45:58 -- 
common/autotest_common.sh@1695 -- $ LCOV='lcov 00:35:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:35.257 --rc genhtml_branch_coverage=1 00:35:35.257 --rc genhtml_function_coverage=1 00:35:35.257 --rc genhtml_legend=1 00:35:35.257 --rc geninfo_all_blocks=1 00:35:35.257 --rc geninfo_unexecuted_blocks=1 00:35:35.257 00:35:35.257 ' 00:35:35.257 12:45:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:35.257 12:45:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:35:35.257 12:45:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:35.257 12:45:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.257 12:45:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.257 12:45:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.257 12:45:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.257 12:45:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.257 12:45:58 -- paths/export.sh@5 -- $ export PATH 00:35:35.257 12:45:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.257 12:45:58 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:35.257 12:45:58 -- common/autobuild_common.sh@486 -- $ date +%s 00:35:35.257 12:45:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728305158.XXXXXX 00:35:35.257 12:45:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728305158.yCmmwK 00:35:35.257 12:45:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:35:35.257 12:45:58 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:35:35.257 12:45:58 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:35:35.257 12:45:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:35.257 12:45:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude 
/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:35.257 12:45:58 -- common/autobuild_common.sh@502 -- $ get_config_params 00:35:35.257 12:45:58 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:35:35.257 12:45:58 -- common/autotest_common.sh@10 -- $ set +x 00:35:35.257 12:45:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:35:35.257 12:45:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:35:35.257 12:45:58 -- pm/common@17 -- $ local monitor 00:35:35.257 12:45:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:35.257 12:45:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:35.257 12:45:58 -- pm/common@25 -- $ sleep 1 00:35:35.257 12:45:58 -- pm/common@21 -- $ date +%s 00:35:35.257 12:45:58 -- pm/common@21 -- $ date +%s 00:35:35.257 12:45:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728305158 00:35:35.257 12:45:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728305158 00:35:35.257 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728305158_collect-vmstat.pm.log 00:35:35.257 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728305158_collect-cpu-load.pm.log 00:35:36.195 12:45:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:35:36.195 12:45:59 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:35:36.195 12:45:59 -- spdk/autopackage.sh@14 -- $ timing_finish 00:35:36.195 12:45:59 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:36.195 12:45:59 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:36.195 12:45:59 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:36.195 12:45:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:36.195 12:45:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:36.195 12:45:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:36.195 12:45:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:36.195 12:45:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:35:36.195 12:45:59 -- pm/common@44 -- $ pid=86280 00:35:36.195 12:45:59 -- pm/common@50 -- $ kill -TERM 86280 00:35:36.195 12:45:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:36.195 12:45:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:35:36.195 12:45:59 -- pm/common@44 -- $ pid=86282 00:35:36.195 12:45:59 -- pm/common@50 -- $ kill -TERM 86282 00:35:36.195 + [[ -n 5241 ]] 00:35:36.195 + sudo kill 5241 00:35:36.464 [Pipeline] } 00:35:36.480 [Pipeline] // timeout 00:35:36.485 [Pipeline] } 00:35:36.499 [Pipeline] // stage 00:35:36.504 [Pipeline] } 00:35:36.517 [Pipeline] // catchError 00:35:36.526 [Pipeline] stage 00:35:36.528 [Pipeline] { (Stop VM) 00:35:36.541 [Pipeline] sh 00:35:36.825 + vagrant halt 00:35:39.359 ==> 
default: Halting domain... 00:35:45.940 [Pipeline] sh 00:35:46.221 + vagrant destroy -f 00:35:48.791 ==> default: Removing domain... 00:35:49.370 [Pipeline] sh 00:35:49.656 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:35:49.672 [Pipeline] } 00:35:49.678 [Pipeline] // stage 00:35:49.681 [Pipeline] } 00:35:49.688 [Pipeline] // dir 00:35:49.692 [Pipeline] } 00:35:49.700 [Pipeline] // wrap 00:35:49.703 [Pipeline] } 00:35:49.710 [Pipeline] // catchError 00:35:49.715 [Pipeline] stage 00:35:49.716 [Pipeline] { (Epilogue) 00:35:49.723 [Pipeline] sh 00:35:50.001 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:55.297 [Pipeline] catchError 00:35:55.298 [Pipeline] { 00:35:55.306 [Pipeline] sh 00:35:55.583 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:55.583 Artifacts sizes are good 00:35:55.592 [Pipeline] } 00:35:55.605 [Pipeline] // catchError 00:35:55.614 [Pipeline] archiveArtifacts 00:35:55.622 Archiving artifacts 00:35:55.735 [Pipeline] cleanWs 00:35:55.747 [WS-CLEANUP] Deleting project workspace... 00:35:55.747 [WS-CLEANUP] Deferred wipeout is used... 00:35:55.753 [WS-CLEANUP] done 00:35:55.755 [Pipeline] } 00:35:55.770 [Pipeline] // stage 00:35:55.776 [Pipeline] } 00:35:55.789 [Pipeline] // node 00:35:55.795 [Pipeline] End of Pipeline 00:35:55.821 Finished: SUCCESS
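Appendix note: the lcov invocations near the end of this run (autotest.sh@394-404) implement a capture, merge and filter flow: the per-test capture is merged with the baseline tracefile, then third-party (dpdk) and system (/usr) sources are stripped from the combined result. The standalone equivalent is sketched below; the file names are illustrative, and the final genhtml step is an assumption for completeness, since it is not invoked in the excerpt above:

#!/usr/bin/env bash
# Coverage post-processing sketch modeled on the lcov calls traced above.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

# Merge the pre-test baseline with the capture taken after the tests (cf. autotest.sh@395).
lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info

# Drop sources that should not count toward coverage (cf. autotest.sh@396-403).
lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov $LCOV_OPTS -q -r cov_total.info '/usr/*' --ignore-errors unused -o cov_total.info

# Render an HTML report (assumed step; genhtml does not appear in this log).
genhtml cov_total.info --branch-coverage -o coverage_html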